# Physics - Relaxation Methods

In this model, the position of each vertex is recalculated for each frame. Its position is the result of two competing sets of forces:

1. All vertices repel each other according to an inverse-square law (regardless of whether they are connected by edges).
2. Each edge acts like a spring, trying to return to its nominal length.
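The two forces above can be sketched as follows. This is a minimal illustration, not blobBean's actual code: the class and method names, the 2D representation (the real model is 3D), and the force constants are all assumptions.

```java
// Sketch of the two competing forces (hypothetical names and constants).
public class Forces {
    static final double REPEL = 1.0;   // repulsion constant (assumed)
    static final double SPRING = 0.5;  // spring stiffness (assumed)

    // Inverse-square repulsion on vertex a from vertex b:
    // magnitude REPEL / d^2, directed away from b.
    static double[] repulsion(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1];
        double d2 = dx * dx + dy * dy;
        double d = Math.sqrt(d2);
        double mag = REPEL / d2;
        return new double[] { mag * dx / d, mag * dy / d };
    }

    // Spring force on vertex a of an edge (a, b) with nominal length rest:
    // proportional to the difference between actual and nominal length,
    // pulling a towards b when the edge is stretched.
    static double[] spring(double[] a, double[] b, double rest) {
        double dx = b[0] - a[0], dy = b[1] - a[1];
        double d = Math.sqrt(dx * dx + dy * dy);
        double mag = SPRING * (d - rest);
        return new double[] { mag * dx / d, mag * dy / d };
    }
}
```

Summing both contributions over all vertex pairs and all edges gives the net force that moves each vertex towards equilibrium.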

Eventually, under the action of these forces, the object (blob) will come to an equilibrium and its shape will stabilise. If anything changes, for instance another object comes close, the equilibrium is disturbed and the shape changes again.

An object to be modeled is made up of a number of vertex points; these vertices are connected to other vertices by edges. Each edge has a nominal length, but its actual length can vary under the action of the forces above, so think of edges as bonds or springy struts.

So to define the body, all we need is a list of edges: for each edge we specify the vertex at each end and a nominal length. But defining the 'blob' this way raises problems:

- The starting position and orientation of the blob are not known until the program is run.
- If the vertices are not very richly interconnected by edges, the object might flip inside-out, so we can't be sure which side of each face will render.
- It's difficult to build a 'blob' from an existing mesh, such as a VRML IndexedFaceSet.

To get round these problems, blobBean has been implemented as an extension of ifsBean. The object can then be rendered as a geometry array whose vertices are regenerated every frame from the coordinates calculated by the relax method. The advantage of this is that the initial positions of the vertices are set before the forces are calculated, so at least we know the starting point. The edge lengths can then be calculated from these initial positions, so there is no need to specify them explicitly.
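Deriving the nominal lengths from the initial mesh might look like this. The edge and coordinate representations here are assumptions for illustration, not blobBean's actual data structures.

```java
// Sketch: take each edge's nominal length from the initial vertex positions,
// so the mesh's starting shape is also its rest shape.
public class EdgeLengths {
    // edges[i] = {vertexA, vertexB}; coords[v] = {x, y, z}
    static double[] nominalLengths(int[][] edges, double[][] coords) {
        double[] lengths = new double[edges.length];
        for (int i = 0; i < edges.length; i++) {
            double[] a = coords[edges[i][0]];
            double[] b = coords[edges[i][1]];
            double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
            lengths[i] = Math.sqrt(dx * dx + dy * dy + dz * dz);
        }
        return lengths;
    }
}
```

Because the rest lengths equal the starting lengths, the blob begins at (or near) equilibrium and only deforms when something disturbs it.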

I found it useful to add two new parameters to the IndexedFaceSet, nodeIndex and Edges. These specify which nodes and edges take part in the calculations, so that some parts of the shape can be rigid while other parts can move. The other advantage is that we can have vertices which are used not for rendering but for structure; for example, some of the 'vertices' might sit in the middle of the body for mechanical stability. Without the extra parameters it could be very difficult to work out which of the vertices are needed for the rendering model. This is similar to modeling a body as atoms and the bonds between them. Modeling every atom would require more computing power than is likely to be available, but why not have one vertex for every million, or every billion, atoms?
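One way the nodeIndex idea could work in a relaxation step is sketched below: only the listed vertices are moved, and everything else stays rigid. The method names and the simple force-times-timestep update are assumptions, not blobBean's actual implementation.

```java
import java.util.HashSet;
import java.util.Set;

// Sketch of one relaxation step restricted to the vertices in nodeIndex.
public class Relax {
    // coords[v] and forces[v] are {x, y, z}; dt is the timestep.
    static void step(double[][] coords, double[][] forces,
                     int[] nodeIndex, double dt) {
        Set<Integer> movable = new HashSet<>();
        for (int n : nodeIndex) movable.add(n);
        for (int v = 0; v < coords.length; v++) {
            if (!movable.contains(v)) continue;  // rigid vertex: leave in place
            for (int k = 0; k < coords[v].length; k++)
                coords[v][k] += dt * forces[v][k];
        }
    }
}
```

Vertices outside nodeIndex still contribute to rendering (or to structure), but the relaxation never moves them, which is what lets part of the shape stay rigid.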