
PPPP.7 Example of DICOM Real-Time Video Implementation

The following example illustrates a specific implementation of the Generic Use Case 4: Augmented Reality described above.

Figure PPPP.7-1. Example of implementation for Augmented reality based on optical image


The described use case is the replacement of the lens in cataract surgery (capsulorhexis). The lenses are manufactured individually, taking into account the patient's astigmatism. The best places for the incisions, the position where the capsular bag should be torn, and the optimal alignment for the new lens are calculated, and a graphical plane is overlaid onto the optical path of the microscope to assist the surgeon, as shown in Figure PPPP.7-1.

Some solutions incorporate a frame grabber in the ophthalmology microscope that grabs video frames at 50/60 Hz. These frames are analyzed to identify the position and orientation of the eye, and a series of graphical objects is then superimposed as a graphical plane onto the optical path, showing the surgeon the best places to perform the incisions and how to orient the new lens to compensate for the astigmatism.

In practice, a grabbed video frame takes three frame periods before it is accessible to the image processor that computes the series of graphical objects to be drawn as overlays on the optical image. This results in a delay between the frame from which the objects were computed and the frame on which they are drawn. For safety reasons, it is important to record what the surgeon has actually seen. Because of the latency of frame grabbing and of computing the positions of the graphical objects, the digital images are delayed in memory so that the objects are blended onto the correct digital image for the recording made in parallel.
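
The following sketch illustrates how such delay compensation might be arranged. The three-frame latency comes from the description above; the handler names and the blend and record placeholders are hypothetical illustrations, not part of the standard:

    from collections import deque

    GRAB_LATENCY = 3  # frame periods between capture and availability (per the text)

    def blend(frame, overlay):
        """Placeholder for alpha-blending the graphical plane onto the digital frame."""
        return (frame, overlay)

    def record(blended_frame):
        """Placeholder for appending a frame to the parallel recording."""
        pass

    pending_overlays = deque()  # (target_frame_index, overlay), oldest first

    def on_overlay_computed(overlay, source_index):
        # An overlay computed from frame N is shown to the surgeon on the
        # optical image N + GRAB_LATENCY frames later; tag it for that frame
        # so the recording matches what the surgeon actually saw.
        pending_overlays.append((source_index + GRAB_LATENCY, overlay))

    def on_frame_grabbed(frame, frame_index):
        # Called at 50/60 Hz for each grabbed digital frame.
        while pending_overlays and pending_overlays[0][0] < frame_index:
            pending_overlays.popleft()  # drop overlays whose frame has passed
        if pending_overlays and pending_overlays[0][0] == frame_index:
            _, overlay = pending_overlays.popleft()
            record(blend(frame, overlay))
        else:
            record(frame)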

DICOM Real-Time Video enables the storage of the recorded video and of the frame-by-frame positions of these graphical objects as separate flows. The DICOM-RTV Metadata Flow might also carry other values associated with the streams, such as the microscope's zoom, focus and light intensity settings, or the phacoemulsification device's various settings, such as pressure. These separately stored flows could later be mixed together to aid post-operative analysis or for teaching purposes. The overlay could be replayed either on the later image on which the surgeon saw it, or on the image from which it was calculated, in order to improve the algorithm. This approach also reduces the workload of the machine during the operation, because blending the video with the display aids is deferred to the post-operative analysis phase, and it preserves the original images.
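
As an illustration, the replay selection could look like the following sketch; the dictionary keys and the function name are hypothetical stand-ins for whatever indexing a real archive application would use:

    def frame_for_overlay(metadata_item, frames_by_time, mode="as_seen"):
        """Choose the stored video frame a recorded overlay should be drawn on.

        frames_by_time  -- mapping from Frame Origin Timestamp to decoded frame
        metadata_item   -- one stored DICOM-RTV Metadata sample for the overlay
        mode 'as_seen'     : the frame on which the surgeon saw the overlay
        mode 'as_computed' : the frame the overlay was calculated from,
                             useful when refining the algorithm
        """
        if mode == "as_seen":
            key = metadata_item["FrameOriginTimestamp"]      # e.g., T6
        else:
            key = metadata_item["ReferencedImageDatetime"]   # e.g., T0
        return frames_by_time[key]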

The RTP Timestamps (RTS) of the video and DICOM-RTV Metadata Flows must match. The Frame Origin Timestamp (FOTS) contained in the DICOM-RTV Metadata must be consistent with the RTP Timestamp, enabling proper synchronization between flows. As shown in Figure PPPP.7-2, it is expected that the Frame Origin Timestamps of both the digital image and the overlays are set to T6 when the Image Datetime is T3 and the Referenced Image Datetime of the Mask is T0, represented as the T0 MASK.

Figure PPPP.7-2. Example of implementation for Augmented reality based on optical image
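
The sketch below illustrates one way such a consistency check could be expressed, assuming the 90 kHz RTP media clock that SMPTE ST 2110 specifies for video flows and Frame Origin Timestamps expressed in seconds on the common clock shared by all flows; the function names are illustrative:

    RTP_CLOCK_HZ = 90_000  # 90 kHz media clock for video flows (SMPTE ST 2110)

    def expected_rtp_timestamp(frame_origin_seconds):
        # Map a Frame Origin Timestamp, in seconds on the common (PTP) clock,
        # to the 32-bit RTP Timestamp field carried in each packet.
        return int(frame_origin_seconds * RTP_CLOCK_HZ) & 0xFFFFFFFF

    def flows_match(video_rtp_ts, metadata_rtp_ts, metadata_fots_seconds):
        # A video frame and a metadata sample belong together when both carry
        # the same RTP Timestamp and the metadata's FOTS maps to that value.
        return (video_rtp_ts == metadata_rtp_ts
                == expected_rtp_timestamp(metadata_fots_seconds))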


Note

In the case where the surgeon is viewing the digital image rather than the optical image, the approach could be different, as shown in Figure PPPP.7-3.

Figure PPPP.7-3. Example of implementation for Augmented reality based on digital image

