Virtual Reality Training for Laparoscopic Skills

The explosion of laparoscopic procedures in urology, led by such pioneers as Gill, Clayman, Guillonneau, and Kavoussi, has been associated with clear health benefits to urology patients compared with open techniques. As laparoscopic approaches are validated as the standard of care, the dissemination, acquisition, and certification of laparoscopic skills present a major challenge. The American College of Surgeons recommends granting clinical privileges based on evaluation of education, training, experience, and demonstrated current competence. The Society of American Gastrointestinal Endoscopic Surgeons expands this to include credentialing in diagnostic laparoscopy, hands-on experience in either a residency/fellowship or a recognized didactic course, and observation of cases by a mentor or proctor.

An additional problem is that urology has lacked a procedure common enough to adequately maintain these skills. Laparoscopic pelvic lymph node dissection in the 1990s paved the way for laparoscopic renal, prostate, and bladder procedures. Laparoscopic fellowships and mini-fellowships have been successful among the early adopters of this technology. Training courses offering hands-on practice with porcine, cadaver, inanimate, and virtual reality models have emerged as attractive alternatives but must be coupled with mentored learning opportunities for quality assurance. The summary of pertinent models and validation studies below revolves primarily around laparoscopic cholecystectomy and may or may not be translatable to urological applications.

Inanimate Models (Box and Video Trainers)

For basic laparoscopic skills acquisition, inanimate trainers consisting of box and/or video trainers have been developed and used in urology programs. They are usually relatively inexpensive, but all of them require mentored supervision for data acquisition and assessment, with its inherent cost. Examples of inanimate trainers that have been validated include the "Rosser drills," which showed improvement in movements between the first and fourth trial (73,74). Reznick et al. (75,76) validated a "bench station" examination, which includes a widely used global assessment scale. The University of Kentucky established face and construct validity of their inanimate models for the laparoscopic skills needed to perform laparoscopic appendectomy, cholecystectomy, and herniorrhaphy (77). The LTS 2000 physical model simulator (Hasson et al.) showed a positive correlation between hours of practice on the simulator and basic gynecologic laparoscopic maneuvers, and prospectively was able to reliably and reproducibly detect different levels of laparoscopic expertise in general surgery and obstetrics/gynecology residents (78-80). A pilot study of laparoscopic urethrovesical anastomosis was performed by Katz et al., who reported a training program consisting of (1) passing a 30 cm polyglactin ligature between two needle holders, (2) intracorporeal knot tying, (3) intracorporeal suturing, (4) performing a linear anastomosis, and (5) performing a circular running anastomosis. Chicken skin and cardboard were used for this model. This is a very pertinent application, but validation studies are lacking. Other newer-generation box video trainers include LapMan (Simulab, Seattle, WA).

In an objective scoring system for laparoscopic cholecystectomy developed in a multi-institutional fashion by Eubanks et al. (81), the moves of a laparoscopic procedure were dissected into distinct goals associated with distinct deviations from the proper procedure (errors). This objectivity was based on reliable subjective assessment of videotaped procedures. Derossis et al. from McGill University and Coleman et al. at the University of Texas Southwestern established construct validity for their training models as well.
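
To make concrete how a goal-and-error decomposition of this kind can be turned into a quantitative score, the sketch below defines a hypothetical checklist rubric in Python. The goal names, error types, and weights are invented for illustration and are not taken from the Eubanks et al. instrument.

```python
# Hypothetical goal/error scoring rubric for a videotaped laparoscopic
# procedure, in the spirit of checklist-based objective scoring.
# All goal names, error types, and weights below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Goal:
    name: str                                             # discrete step of the procedure
    points: float                                          # credit for completing the step
    error_penalties: dict = field(default_factory=dict)    # error type -> deduction per occurrence

def score_procedure(goals, completed, observed_errors):
    """Sum credit for completed goals and subtract weighted error deductions."""
    total = 0.0
    for g in goals:
        if g.name in completed:
            total += g.points
        for err, count in observed_errors.get(g.name, {}).items():
            total -= g.error_penalties.get(err, 0.0) * count
    return total

# Example rubric (illustrative values only)
rubric = [
    Goal("expose cystic duct", 10.0, {"grasper slip": 1.0, "bleeding": 3.0}),
    Goal("clip and divide duct", 10.0, {"misplaced clip": 2.0}),
]
print(score_procedure(rubric,
                      completed={"expose cystic duct"},
                      observed_errors={"expose cystic duct": {"bleeding": 1}}))
```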

Rosen et al. at the University of Washington designed a robot called the Blue Dragon that, when connected to the instruments, tracks all motion and translates movements into signatures, providing true objective assessment. Their objective was to develop a skill scale using statistical Markov models (10,32,82-84). Five novice surgeons and five expert surgeons performed two minimally invasive surgical procedures (cholecystectomy and Nissen fundoplication) in a porcine model. An instrumented laparoscopic grasper equipped with a three-axis force/torque sensor was used to measure the forces and torques at the hand/tool interface, synchronized with endoscopic video of the operative maneuvers (Fig. 13). Three types of analysis were performed on the raw data: (1) video analysis encoding the type of tool-tip/tissue interaction, (2) vector quantization encoding the force/torque data into clusters (signatures), and (3) Markov modeling for evaluating surgical skill level. The video analysis was performed by two expert surgeons, who encoded the video of each step of the surgical procedure frame by frame (NTSC, 30 frames per second). The encoding used a taxonomy of 14 different tool maneuvers (Table 4), which encompass all the tool/tissue interactions identified during the group's previous video analysis of surgical procedures. Each identified surgical tool/tissue interaction had a unique force/torque pattern. For example, in laparoscopic cholecystectomy, isolation of the cystic duct and artery involves repeated pushing and spreading maneuvers (PS-SP, Table 4), which are accomplished by applying pushing forces mainly along the Z-axis (Fz) and spreading forces (Fg) on the handle. These 14 tool/tissue interactions allowed each surgical procedure to be encoded.
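
A minimal sketch of the general pipeline, vector quantization of force/torque samples into discrete signatures followed by Markov-chain likelihood scoring, is shown below. It is not the Blue Dragon software: the cluster count, smoothing, and data are placeholders, and the published work used richer multistate models.

```python
# Sketch of skill scoring via vector quantization + Markov modeling,
# in the spirit of Rosen et al.; cluster counts and data are placeholders,
# not values from the published study.
import numpy as np
from sklearn.cluster import KMeans

def quantize(ft_samples, n_clusters=8, random_state=0):
    """Cluster raw force/torque vectors (N x 6: Fx..Tz) into discrete symbols."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit(ft_samples), km.labels_

def transition_matrix(symbols, n_states):
    """Estimate a first-order Markov transition matrix from a symbol sequence."""
    counts = np.ones((n_states, n_states))          # add-one smoothing
    for a, b in zip(symbols[:-1], symbols[1:]):
        counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def log_likelihood(symbols, T):
    """Log-probability of a symbol sequence under transition matrix T."""
    return float(sum(np.log(T[a, b]) for a, b in zip(symbols[:-1], symbols[1:])))

# Usage sketch: fit the quantizer on pooled data, build one model per skill
# level, then score a new trial against each model.
rng = np.random.default_rng(0)
expert_ft = rng.normal(0.0, 1.0, size=(500, 6))    # placeholder force/torque data
novice_ft = rng.normal(0.5, 2.0, size=(500, 6))
km, _ = quantize(np.vstack([expert_ft, novice_ft]))
T_expert = transition_matrix(km.predict(expert_ft), km.n_clusters)
T_novice = transition_matrix(km.predict(novice_ft), km.n_clusters)

trial = km.predict(rng.normal(0.0, 1.0, size=(200, 6)))
print("closer to expert model" if log_likelihood(trial, T_expert)
      > log_likelihood(trial, T_novice) else "closer to novice model")
```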

Virtual Reality Models For Laparoscopy

Schijven and Jakimowicz (85) published a thorough review of virtual reality laparoscopic simulators as of October 2003. They surveyed eight of the main commercially active virtual reality companies regarding their laparoscopic products. Their results are summarized in Table 5.

■ Immersion Medical: Gaithersburg, MD (www.immersionmedical.com)

■ Medical Education Trainers Incorporated (METI), Cincinnati, OH (www.meti.com)

■ Mentice: Gothenburg, Sweden (www.mentice.com)

■ Simbionix: Cleveland, OH (www.simbionix.com)

■ Simulab: Seattle, WA (www.simulab.com)

FIGURE 13 ■ Instrumented endoscopic grasper. (A) Three-axis force/torque sensor (modified ATI-Mini model) implemented on the outer tube of a 10 mm reusable Storz grasper equipped with interchangeable tips (Babcock, curved dissector, and atraumatic grasper) and a force sensor located on the instrument handle. (B) Real-time user interface of force/torque information synchronized with the endoscopic view of the procedure using picture-in-picture mode. Source: Rosen et al., 1999.

TABLE 4 ■ Definition of Tool/Tissue Interactions and the Corresponding Directions of Forces and Torques Applied During MIS

Tool/tissue interaction        | Type | Acronym  | Fx | Fy | Fz | Tx | Ty | Tz | Fg
Idle                           | I    | ID       | NA | NA | NA | NA | NA | NA | NA
Grasping                       | I    | GR       |    |    |    |    |    |    | +
Spreading                      | I    | SP       |    |    |    |    |    |    | -
Pushing                        | I    | PS       |    |    | -  |    |    |    |
Sweeping (lateral retraction)  | I    | SW       | ±  | ±  |    | ±  | ±  |    |
Grasping-pulling               | II   | GR-PL    |    |    | +  |    |    |    | +
Grasping-pushing               | II   | GR-PS    |    |    | -  |    |    |    | +
Grasping-sweeping              | II   | GR-SW    | ±  | ±  |    | ±  | ±  |    | +
Pushing-spreading              | II   | PS-SP    |    |    | -  |    |    |    | -
Pushing-sweeping               | II   | PS-SW    | ±  | ±  | -  | ±  | ±  |    |
Sweeping-spreading             | II   | SW-SP    | ±  | ±  |    | ±  | ±  |    | -
Grasping-pulling-sweeping      | III  | GR-PL-SW | ±  | ±  | +  | ±  | ±  |    | +
Grasping-pushing-sweeping      | III  | GR-PS-SW | ±  | ±  | -  | ±  | ±  |    | +
Pushing-sweeping-spreading     | III  | PS-SW-SP | ±  | ±  | -  | ±  | ±  |    | -

Abbreviation: NA, not applicable. Source: Rosen et al., 1999.
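
The sign patterns in Table 4 can be read as a lookup from observed force/torque directions to an interaction label. The sketch below encodes a few of the patterns; the dead-band threshold and the subset of interactions included are assumptions for illustration, not part of the published taxonomy.

```python
# Illustrative lookup from force/torque sign patterns to Table 4 acronyms.
# Only a few interactions are encoded, and the +/-/± conventions and the
# 0.1 dead-band threshold are assumptions for this sketch.
SIGN_PATTERNS = {
    # (Fx, Fy, Fz, Tx, Ty, Tz, Fg): 0 = negligible, +1/-1 = direction, 2 = either (±)
    (0, 0, 0, 0, 0, 0, +1): "GR",       # grasping
    (0, 0, 0, 0, 0, 0, -1): "SP",       # spreading
    (0, 0, -1, 0, 0, 0, 0): "PS",       # pushing
    (2, 2, 0, 2, 2, 0, 0):  "SW",       # sweeping (lateral retraction)
    (0, 0, -1, 0, 0, 0, -1): "PS-SP",   # pushing-spreading
}

def encode(sample, threshold=0.1):
    """Map a 7-vector (Fx, Fy, Fz, Tx, Ty, Tz, Fg) to an interaction acronym."""
    signs = tuple(0 if abs(v) < threshold else (1 if v > 0 else -1) for v in sample)
    for pattern, acronym in SIGN_PATTERNS.items():
        if all(p == 2 or p == s for p, s in zip(pattern, signs)):
            return acronym
    return "ID"  # idle / unrecognized

print(encode([0.0, 0.0, -1.8, 0.0, 0.0, 0.0, -0.6]))  # -> "PS-SP"
```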

■ Simquest: Silver Spring, MD (www.simquest.com)

■ ReachIn Technologies: Stockholm, Sweden (www.reachin.se)

■ Surgical Science: Gothenburg, Sweden (www.surgical-science.com)

■ Red Llama Technology Group, LLC: Seattle, WA, cognitive simulation (www.redllamatech.com)

■ Mimic Technologies: Seattle, WA, haptics (www.mimic.ws)

The minimally invasive surgical trainer-virtual reality (MIST-VR; Mentice, Sweden) deserves special mention because it has undergone the most rigorous validation studies. The MIST-VR system runs on a desktop PC (400 MHz Pentium II, 64 MB RAM), with tasks viewed on a 17 in. cathode ray tube monitor at a frame rate of approximately 15 frames/sec. The laparoscopic instruments (Immersion Corp., San Jose, CA) provide six degrees of freedom, and a foot pedal controls diathermy. Abstract targets appear within the operating space according to the specific skill task selected and can be grasped and manipulated with the virtual instruments. Each task is objectively scored and quantified. Seymour, Gallagher, and Satava performed a series of validation studies reliably confirming face, content, construct, discriminate, and predictive validity (86-89). Their "Virtual Reality to OR" predictive validity study, in which 16 residents were prospectively randomized to training versus no training and baseline video performance metrics for laparoscopic cholecystectomy were established in the operating room both before and after the intervention, is a landmark study in simulation validation (89). Grantcharov et al. (90) confirmed these findings in their own Virtual Reality to OR study, and the Virtual Reality to OR work with MIST-VR has since been expanded into a multi-institutional predictive validity study.

TABLE 5 ■ Surgical Simulation Companies: Validation in Urology

Simulator  | Face | Content | Construct | Concurrent | Discriminate | Predictive
UroMentor  | Yes  | No      | Yes       | Yes        | ±            | ±
PercMentor | Yes  | No      | No        | No         | No           | No
UW TURP    | Yes  | Yes     | Yes       | No         | Yes          | No
MIST-VRa   | Yes  | Yes     | Yes       | Yes        | Yes          | Yes
Pelvic LND | Yes  | Yes     | No        | No         | No           | No

aValidated for general surgical applications, not urology applications.

Gallagher et al. used MIST-VR to study controls versus urology trainees versus urology consultants with the intent of determining whether MIST-VR could be useful in aptitude testing. They found that MIST-VR, which is primarily a visual-spatial assessment tool, was not useful for this purpose (91). Various modules are available for the Procedicus MIST-VR system. A KSA module provides an abdominal environment for more advanced laparoscopic training, including scope and instrument navigation, pick and pass, cutting, suturing, needle passing, and diathermy. Arthroscopy, urology, gynecology, interventional cardiology, and radiology modules are also available for the Procedicus platform. Force feedback is optional (www.mentice.com).
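
Objective task scoring of this kind typically combines completion time, error counts, and an economy-of-movement measure derived from the tracked instrument path. The sketch below computes such a composite score; the weights, the ideal-path normalization, and the metric itself are assumptions for illustration rather than the scoring formula used by MIST-VR or any other commercial trainer.

```python
# Illustrative composite task score from time, errors, and instrument path
# length (economy of movement). The weights and the ideal-path normalization
# are assumptions for this sketch, not any commercial trainer's algorithm.
import math

def path_length(positions):
    """Total distance travelled by the instrument tip (list of (x, y, z) in mm)."""
    return sum(math.dist(a, b) for a, b in zip(positions[:-1], positions[1:]))

def task_score(time_s, errors, tip_positions, ideal_path_mm,
               w_time=1.0, w_error=10.0, w_motion=0.5):
    """Lower is better: weighted sum of time, errors, and excess motion."""
    excess_motion = max(0.0, path_length(tip_positions) - ideal_path_mm)
    return w_time * time_s + w_error * errors + w_motion * excess_motion

# Usage sketch with made-up tracking data
track = [(0, 0, 0), (10, 5, 2), (22, 9, 3), (30, 10, 3)]
print(round(task_score(time_s=71.0, errors=2, tip_positions=track,
                       ideal_path_mm=30.0), 1))
```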

LapMentor is an upper-end virtual reality laparoscopic trainer that allows completion of entire general laparoscopic procedures. Basic skills task modules include instrument navigation, object manipulation, clipping, and cutting. Virtual patient cases are also included, with accountability for different port placement sites. Xitact's force feedback system was acquired and employed to provide haptic feedback. Urologic applications are under development, and validation studies are preliminary at this point (www.simbionix.com).

LapSim offers nine basic laparoscopic skills training modules ranging from navigation to grasping, cutting, clip applying, and suturing. Force feedback is optional. Evidence for construct validity has been mixed. Duffy et al. (92) established construct validity, with the simulator distinguishing between novices, trainees, and experts. Ro et al. (93), with a smaller number of subjects and examining only one trial, actually showed that novices outperformed experts on the instrumentation, suturing, and dissection modules. In a prospective design, however, naive subjects trained on the LapSim virtual-reality part-task trainer performed better on live surgical tasks in a porcine model than those trained with a traditional box trainer (94). The software was recently upgraded, and modules for more advanced skill training and gynecologic procedures were added (www.surgical-science.com).

The Haptica ProMIS trainer is a PC-based hybrid box and virtual reality trainer for laparoscopic skills. It has undergone content and construct validity studies at Emory University and Imperial College London. Significant differences in the performance of laparoscopic cholecystectomy subtasks were found between novices, trainees, and experts (95). Interestingly, Gallagher et al. found that older subjects (ages 60-69) performed significantly worse than younger subjects (ages 30-39 and 40-49) on the box-trainer task for correct incisions (13.1 vs. 19.3, p < 0.008) and incorrect incisions (12.3 vs. 2.5, p > 0.05). They also performed worse on the virtual reality task for time (132 vs. 71, p < 0.05), error (99 vs. 41, p < 0.05), and economy of movement (22.8 vs. 11.7, p < 0.05) (www.haptica.com).

As laparoscopic skills become more widely disseminated, trainers representing complete laparoscopic urology procedures are desperately needed; they are currently under development at numerous universities and simulation companies.
