## Step 8: Build and Render the Model

There are many modeling software programs, including Maya® (Alias Systems Corp, Toronto, Canada) and Rhinoceros 3D® (Robert McNeel and Associates, Seattle, WA), that allow models to be constructed and then exported for use in virtual environments. Often, a number of applications are used to create a complete set of simulation models.

The simplest models are based on two-dimensional images or pictures. This is often employed when simulating fluoroscopic procedures.

For example, a single image, or series of images, captured during a real procedure may be augmented during simulation to give the impression that the image or images are being captured in real time. The movement of a guide wire may be rendered on top of a fluoroscopy picture by updating only the pixels that represent the guide wire.
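The pixel-update approach described above can be sketched in a few lines. This is a minimal illustration, not any particular simulator's implementation; the function name and wire representation are hypothetical:

```python
import numpy as np

def overlay_guide_wire(background, wire_pixels, wire_value=255):
    """Render a guide wire on a static fluoroscopy frame by updating
    only the pixels the wire occupies (hypothetical helper)."""
    frame = background.copy()
    for row, col in wire_pixels:
        frame[row, col] = wire_value
    return frame

# A small grayscale "fluoroscopy image" and a short wire segment.
image = np.zeros((4, 4), dtype=np.uint8)
wire = [(1, 1), (2, 1), (3, 2)]
frame = overlay_guide_wire(image, wire)
```

Because the background never changes, only the handful of wire pixels must be redrawn each frame, which is what gives the impression of real-time imaging.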

Creating three-dimensional models of tissue structures can be quite challenging. Typically, a set of medical images is sectioned, and contours are extracted from a structure of interest (using applications such as NIH Image and ScanTool). These two-dimensional contours are then used to create a three-dimensional model. The contours might simply be used to guide the artistic creation of a three-dimensional model using freeform modeling software. Another option is to connect the contours with polygons to form a surface. It is also possible to fit a set of equations to the contours in order to define a surface or solid model.
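The second option, connecting stacked contours with polygons, can be sketched as follows. This is a simplified scheme that assumes both contours have the same number of vertices in corresponding order; real contour-stitching algorithms must handle mismatched vertex counts and branching structures:

```python
def stitch_contours(lower, upper):
    """Connect two stacked contours (equal vertex counts assumed)
    into a band of triangles forming a surface between them."""
    n = len(lower)
    triangles = []
    for i in range(n):
        j = (i + 1) % n
        # Two triangles per quad between neighboring contour edges.
        triangles.append((lower[i], lower[j], upper[i]))
        triangles.append((upper[i], lower[j], upper[j]))
    return triangles

# Two square contours extracted from adjacent image slices (z=0, z=1).
square_z0 = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
square_z1 = [(0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
band = stitch_contours(square_z0, square_z1)
```

Stitching every adjacent pair of slice contours in this way yields a closed polygonal surface for the structure of interest.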

Polygonal models are the most common models used in three-dimensional visualization.

They form a computationally "cheap" representation and are very versatile. Hardware-accelerated rendering of polygons is common. The most general class of polygon models can be called a "polygon soup," which is a collection of polygons that are not geometrically connected and have no topology available. If the polygons form a closed manifold, then the model has well-defined inside and outside, which is very helpful when performing collision detection. If the manifold is convex, then this structure is even better suited for collision detection algorithms.
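The collision-detection advantage of a convex closed manifold can be made concrete: a convex solid is the intersection of half-spaces, so an inside/outside query is just a plane test per face. This sketch uses a hypothetical half-space representation rather than an explicit polygon mesh:

```python
def inside_convex(point, half_spaces):
    """A convex closed manifold has a well-defined inside: the point is
    inside iff it satisfies every bounding-plane half-space (n . p <= d)."""
    return all(
        sum(n_i * p_i for n_i, p_i in zip(normal, point)) <= d
        for normal, d in half_spaces
    )

# The unit cube [0, 1]^3 expressed as six half-spaces (normal, offset).
cube = [
    ((1, 0, 0), 1), ((-1, 0, 0), 0),
    ((0, 1, 0), 1), ((0, -1, 0), 0),
    ((0, 0, 1), 1), ((0, 0, -1), 0),
]
```

For a nonconvex manifold, or a polygon soup with no topology, no such simple per-plane test exists, which is why convexity is so valuable to collision algorithms.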

Parametric surface models are also popular. These surfaces are defined by equations, such as nonuniform rational B-splines. To render the geometries defined by these representations, the equations are used to find a set of polygons to represent the object surface. One big advantage of parameterized surfaces is that the polygon resolution can be set to match the rendering capabilities of a particular computer. In addition, collision detection is very efficient with parametric surfaces.
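The resolution-matching idea can be illustrated with a simple parametric surface. This sketch tessellates a sphere (a much simpler parameterization than a NURBS patch, used here only to show the principle) at a caller-chosen grid resolution:

```python
import math

def tessellate_sphere(radius, n_u, n_v):
    """Sample the parametric sphere r(u, v) into a quad grid whose
    resolution (n_u x n_v) can be matched to rendering capability."""
    quads = []
    for i in range(n_u):
        for j in range(n_v):
            corners = []
            for du, dv in ((0, 0), (1, 0), (1, 1), (0, 1)):
                u = 2 * math.pi * (i + du) / n_u   # longitude
                v = math.pi * (j + dv) / n_v       # colatitude
                corners.append((
                    radius * math.sin(v) * math.cos(u),
                    radius * math.sin(v) * math.sin(u),
                    radius * math.cos(v),
                ))
            quads.append(corners)
    return quads

coarse = tessellate_sphere(1.0, 8, 4)    # few polygons, fast to render
fine = tessellate_sphere(1.0, 32, 16)    # many polygons, smoother surface
```

The same underlying equations yield either mesh; a simulator can choose the polygon count at load time based on the host machine's graphics hardware.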

Volumetric models can be constructed from voxels, elements, or implicit primitives. Unlike surface models, volumetric models include information about the interior of the geometry. This can be very helpful for governing collision detection and deformation modeling.

If a model is to be used only for visualization, a voxel representation is likely the most appropriate. A voxel is essentially a small cube (like a three-dimensional pixel) that is uniform in color and shading. Voxelized models are three-dimensional representations created by stacking together a series of two-dimensional image slices (such as computed tomography, magnetic resonance imaging, or ultrasound images). The regions in between the slices are filled in by interpolating, or morphing, from one image to the next. Most medical imaging systems are capable of generating three-dimensional voxel models. Unfortunately, voxel models lack a surface definable with normal vectors. This can make it difficult to apply lighting effects (such as shading and reflection) and to enable collision detection and force feedback. Voxel representations can also require a tremendous amount of computer memory and storage.
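The slice-interpolation step can be sketched with linear blending between adjacent images. This is a minimal example assuming simple linear interpolation; clinical pipelines may use more sophisticated morphing:

```python
import numpy as np

def interpolate_slices(slice_a, slice_b, n_between):
    """Fill the gap between two image slices by linear interpolation,
    producing a stack of intermediate voxel layers."""
    layers = [slice_a]
    for k in range(1, n_between + 1):
        t = k / (n_between + 1)
        layers.append((1 - t) * slice_a + t * slice_b)
    layers.append(slice_b)
    return np.stack(layers)   # shape: (n_between + 2, rows, cols)

# Two tiny 2x2 "slices"; three interpolated layers fill the gap.
a = np.zeros((2, 2))
b = np.full((2, 2), 100.0)
volume = interpolate_slices(a, b, 3)
```

Repeating this for every adjacent slice pair in a scan produces the full voxel volume, which also hints at why memory consumption grows so quickly with resolution.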

One example of a widely used database enabling the construction of volumetric models is the National Library of Medicine's Visible Human Project.

This database consists of detailed sets of scans taken of a deceased human subject. Not only does this data set include computed tomography and magnetic resonance images, but the subject was also physically sectioned (frozen and then sliced into thin slabs) so that pictures could be taken of the slices.

A greater degree of interaction can be achieved with volumetric models based on elements. An element might take the shape of a tetrahedron or cube. The elements are then pieced together, like a set of Lego® bricks, to achieve a three-dimensional volumetric model. Each element may have its own color, shape, and specific behavior. Element models are ideally suited for deformation modeling.
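A basic per-element computation illustrates how such models are processed: each tetrahedral element contributes a simple quantity (here, its volume), and the whole model is the assembly of its elements. This is a generic geometric sketch, not any specific deformation package's code:

```python
def tet_volume(p0, p1, p2, p3):
    """Signed volume of one tetrahedral element: one-sixth of the
    scalar triple product of its edge vectors."""
    a = [p1[i] - p0[i] for i in range(3)]
    b = [p2[i] - p0[i] for i in range(3)]
    c = [p3[i] - p0[i] for i in range(3)]
    det = (a[0] * (b[1] * c[2] - b[2] * c[1])
           - a[1] * (b[0] * c[2] - b[2] * c[0])
           + a[2] * (b[0] * c[1] - b[1] * c[0]))
    return det / 6.0

# One corner tetrahedron of the unit cube; volume is 1/6.
unit_tet = tet_volume((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
```

Deformation models attach material behavior to each element in the same per-element fashion, then sum the contributions over the mesh.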

An extension of element-based models is an implicit representation. Rather than using thousands of small elements to form a model, a small number of implicit primitives can be blended together to form complex geometric structures. Examples of implicit primitives are cubes, spheres, cones, cylinders, etc. These objects can be defined with simple equations that determine whether a coordinate in three-dimensional space is inside, outside, or on the surface of the object. A number of different implicit primitives can be combined into a complex shape using Boolean operations (union, intersection, difference). The big advantage of implicit models is that collision detection is very efficient. A disadvantage is that the implicit equations must still be converted to a set of polygons that can be rendered, and finding these polygons can be very computationally intensive. It should be noted that implicit surfaces (a two-dimensional version of an implicit solid) sometimes have applications in simulation. Implicit surfaces are built from primitives such as triangles, squares, circles, etc. A closed manifold convex polygon model is essentially an implicit surface model.
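The inside/outside equations and the Boolean union can be sketched with signed distance functions, one common way of expressing implicit primitives (the function names here are illustrative):

```python
def sphere_sdf(center, radius):
    """Implicit sphere: negative inside, zero on the surface,
    positive outside."""
    cx, cy, cz = center
    def f(x, y, z):
        dist = ((x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2) ** 0.5
        return dist - radius
    return f

def union(f, g):
    """Boolean union of two implicit solids: take the smaller distance."""
    return lambda x, y, z: min(f(x, y, z), g(x, y, z))

# Blend two overlapping spheres into one complex implicit shape.
blob = union(sphere_sdf((0, 0, 0), 1.0), sphere_sdf((1.5, 0, 0), 1.0))
inside = blob(0.75, 0, 0) < 0   # one function evaluation per query
```

This is why collision detection is so efficient with implicit models: classifying a point costs a single equation evaluation, with no mesh traversal at all.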

A more detailed delineation of the types of volumetric models is illustrated in Figure 3.

Once the model has been created, it needs to be rendered. Rendering engines such as OpenGL, Java 3D (Sun Microsystems), and DirectX (Microsoft) automatically determine three-dimensional perspective, lighting, and shading, and allow the developer to apply texture images (Fig. 4).
