Rendered Volume Resources enable a user agent to request a server-side 3D volumetric rendering. The user agent communicates the desired rendering by providing Query Parameters or a Volumetric Presentation State within the RESTful request. The origin server then resamples the Target Resource of DICOM instances into Volume Data, applies the provided parameters, and returns the representation in the requested Media Type.
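By way of illustration, a user agent might issue such a request as in the following sketch, written here in Python with the requests library. The base URL, resource path and query parameter names are hypothetical placeholders for this example and are not the parameter names defined by the Standard.

    import requests

    # Hypothetical DICOMweb origin server and study; the resource path and
    # query parameter names below are illustrative only, not normative.
    BASE = "https://example.org/dicomweb"
    STUDY = "1.2.840.99999.1"

    # Basic functions: the desired rendering is described entirely by
    # query parameters supplied with the GET request.
    response = requests.get(
        f"{BASE}/studies/{STUDY}/rendered",
        params={
            "algorithm": "maximum_ip",   # hypothetical rendering-mode parameter
            "orientation": "anterior",   # hypothetical view parameter
        },
        headers={"Accept": "image/jpeg"},  # requested media type
    )
    response.raise_for_status()
    jpeg_bytes = response.content          # rendered representation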
Volumetric Rendering Query Parameters control basic functions that can be used independently or in combination to render a volume of Input Instances upon a GET request. Other advanced functions are enabled by referencing a Presentation State containing input instances or frames, rendering, presentation, graphic annotation, animation, cropping and segmentation parameters defined prior to a GET request. Basic and advanced functions are summarized in Table XXX.7-1.
Table XXX.7-1. Basic and Advanced Web Services Functionality
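The advanced functions can be sketched in the same style as a request that references a previously stored Volumetric Presentation State instead of enumerating individual rendering parameters. The query parameter name used for that reference below is hypothetical and chosen only for illustration.

    import requests

    BASE = "https://example.org/dicomweb"
    STUDY = "1.2.840.99999.1"

    # Advanced functions: the request references a Volumetric Presentation State
    # instance that already defines the input instances or frames, rendering,
    # presentation, graphic annotation, animation, cropping and segmentation
    # parameters. "presentationstateuid" is a hypothetical parameter name.
    response = requests.get(
        f"{BASE}/studies/{STUDY}/rendered",
        params={"presentationstateuid": "1.2.840.99999.2.3.4"},
        headers={"Accept": "video/mp4"},
    )
    response.raise_for_status()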
A CT study is being reviewed on a web-based lightweight viewer. The viewer includes a hanging protocol that displays a coronal MPR as the optimal plane to view the anatomy of interest. The coronal view is presented as a thick slab MIP image to better present contrast-enhanced vasculature. To obtain this image, the viewer submits a RESTful service request specifying a rendering mode, slab thickness, spacing, and media type. The origin server renders the referenced CT images based on the requested parameters and returns the result in the requested media type. The viewer presents the images.
The user agent identifies input instances with geometric consistency, which are then assembled into volume data by the origin server. Algorithm and display parameters are applied to the volume data in order to achieve the requested presentation, and lastly, the representation is encoded into one or more images of the requested media type and returned in a response payload to the user agent.
Figure XXX.7-2 shows the rendering pipeline for a simple volume and how various parts of the request URL correspond to various rendering details. Details of each step are described in the subsections that follow.
Volumetric rendering applications require 2D slice data input. For the origin server to render the data as a volume, the input slices require a degree of consistency, such as a common patient frame of reference, pixel attributes (rows, columns, bit depth) and spatial alignment. Slices may possess Z-axis overlap and/or gaps. DICOM defines the requirements for collections of frames that make up Volumetric Source Information in the Presentation Input Type Volume Input Requirements in Section C.11.23.1 in PS3.3.
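The following sketch suggests the kind of consistency check an origin server might apply to a candidate set of slices before accepting them as Volumetric Source Information. It is illustrative only; the normative criteria remain those of Section C.11.23.1 in PS3.3.

    def is_consistent(datasets):
        """Rudimentary geometric-consistency check over a list of pydicom datasets.

        Illustrative only: the normative criteria are the Presentation Input Type
        Volume Input Requirements in PS3.3 Section C.11.23.1.
        """
        first = datasets[0]
        for ds in datasets[1:]:
            if ds.FrameOfReferenceUID != first.FrameOfReferenceUID:
                return False    # common patient frame of reference required
            if (ds.Rows, ds.Columns, ds.BitsAllocated) != (
                first.Rows, first.Columns, first.BitsAllocated,
            ):
                return False    # matching pixel attributes required
            if ds.ImageOrientationPatient != first.ImageOrientationPatient:
                return False    # slices must share a common orientation
        return True

    # e.g. slices = [pydicom.dcmread(path) for path in slice_paths]
    #      assert is_consistent(slices)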
In this example, three CT acquisitions through the liver are obtained, each corresponding to a contrast phase (arterial, portal-venous and venous). All images are in a single series of Legacy CT Image objects. The scanner used to acquire the images increments Acquisition Number (0020,0012) for each contrast phase in the series:
The user agent identifies the desired phase by requesting the Acquisition Number value "2", corresponding to the portal-venous contrast phase. The origin server identifies the subset of instances within the Target Resource having the requested Acquisition Number, determines that they meet the Presentation Input Type Volume Input Requirements, and proceeds to prepare the Volume Data.
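A minimal sketch of that selection step, assuming the Target Resource has been read into a list of pydicom datasets:

    def select_by_acquisition_number(instances, acquisition_number):
        """Return the subset of pydicom datasets with the requested Acquisition Number."""
        return [ds for ds in instances if int(ds.AcquisitionNumber) == acquisition_number]

    # e.g. portal_venous = select_by_acquisition_number(all_instances, 2)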
Volumetric Source Information is used to prepare Volume Data. Simple Volume Data consists of a contiguous set of frames at a single point in time. A simple volume is also referred to as 3D, in which each of the three dimensions represent a spatial axis (x, y and z).
In this example, the origin server assembles the pixel data from the identified instances into a simple volume as depicted in Figure XXX.7-3.
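A simplified sketch of that assembly step, assuming parallel slices that are ordered by projecting Image Position (Patient) onto the slice normal derived from Image Orientation (Patient):

    import numpy as np

    def build_simple_volume(slices):
        """Stack pydicom slice datasets into a (z, y, x) volume ordered along the slice normal."""
        first = slices[0]
        row_dir = np.array(first.ImageOrientationPatient[:3], dtype=float)
        col_dir = np.array(first.ImageOrientationPatient[3:], dtype=float)
        normal = np.cross(row_dir, col_dir)

        # Sort slices by their position projected onto the slice normal.
        ordered = sorted(
            slices,
            key=lambda ds: float(np.dot(normal, np.array(ds.ImagePositionPatient, dtype=float))),
        )
        return np.stack([ds.pixel_array for ds in ordered], axis=0)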
The Volume Data is presented using a display algorithm, such as Volume Rendering (VR), Maximum Intensity Projection (MIP), and Multiplanar Reconstruction (MPR).
In this example, the user agent requests a 5-millimeter thick, average intensity projection MPR. The origin server applies an "average_ip" algorithm, a method that projects the mean intensity of all interpolated samples in the path of each ray traced from the viewpoint to the plane of projection.
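The following sketch approximates such a slab average with an axis-aligned projection and no ray interpolation, which is a simplification of the ray-traced behavior described above:

    import numpy as np

    def average_ip_slab(volume, slab_center, slab_thickness_mm, spacing_mm, axis=0):
        """Average intensity projection over a slab (axis-aligned simplification).

        volume            -- 3D numpy array, e.g. (z, y, x)
        slab_center       -- index of the slab centre along `axis`
        slab_thickness_mm -- requested slab thickness (e.g. 5 mm)
        spacing_mm        -- voxel spacing along `axis`
        """
        half = max(int(round(slab_thickness_mm / (2.0 * spacing_mm))), 1)
        lo = max(slab_center - half, 0)
        hi = min(slab_center + half + 1, volume.shape[axis])
        slab = np.take(volume, indices=np.arange(lo, hi), axis=axis)
        return slab.mean(axis=axis)   # mean of the samples along each axis-aligned ray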
Presentation parameters define either a static view of the volume, returned as an image, or an animated view, returned as a video.
In this example, the user agent requests an anterior view. Since an image media type, not a video media type, is requested in the Accept header field, and there is only one volume, the origin server creates a view with a fixed coronal orientation at a default location within the volume.
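A possible, simplified mapping from a named view to an axis-aligned orientation and default location is sketched below. It assumes a (z, y, x) volume ordering with y running anterior to posterior; a real renderer would derive this geometry from Image Orientation (Patient) rather than assuming it.

    import numpy as np

    # Illustrative mapping from a requested named view to an axis-aligned projection axis.
    VIEW_TO_AXIS = {
        "anterior": 1,   # project along the anterior-posterior axis -> coronal view
        "top": 0,        # project along the superior-inferior axis  -> axial view
        "left": 2,       # project along the left-right axis         -> sagittal view
    }

    def default_view(volume, view="anterior"):
        axis = VIEW_TO_AXIS[view]
        centre = volume.shape[axis] // 2        # default location: middle of the volume
        return np.take(volume, indices=centre, axis=axis)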
In the last step of the pipeline, the rendered view is encoded using an Acceptable Media Type and returned in the response payload.
In this example, the user agent requests "image/jpeg" in the Accept header field. In response, the origin server returns a representation of the MPR as a single frame JPEG image.
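As a sketch of this encoding step, the rendered 2D array might be scaled to 8 bits and serialized with the Pillow library; the min/max scaling shown is a simple choice made for illustration rather than a defined windowing behavior.

    import io
    import numpy as np
    from PIL import Image

    def encode_jpeg(rendered, window_min=None, window_max=None):
        """Scale a rendered 2D array to 8 bits and encode it as image/jpeg."""
        lo = float(rendered.min()) if window_min is None else window_min
        hi = float(rendered.max()) if window_max is None else window_max
        scaled = np.clip((rendered - lo) / max(hi - lo, 1e-6), 0.0, 1.0)
        frame = (scaled * 255).astype(np.uint8)

        buffer = io.BytesIO()
        Image.fromarray(frame).save(buffer, format="JPEG")
        return buffer.getvalue()    # response payload body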
A temporal MRI study (consisting of 5 Dynamic Contrast Enhanced phases of the breast) is being reviewed on a web-based lightweight viewer. The viewer includes a hanging protocol that displays a 3D MIP. To obtain the 3D MIP, the viewer submits a RESTful service request specifying the Instances to be rendered, rendering mode, orientation, animation and media type. The origin server renders the referenced MR images based on the requested parameters and returns the result in the requested media type. The viewer presents the images.
Figure XXX.7-5 shows the rendering pipeline for temporal volumes and how various parts of the request URL correspond to various rendering details. Details of each step are described in the subsections that follow. For brevity, only 2 volumes are shown.
In this example, the first phase is non-contrast and phases 2-5 are contrast-enhanced. All phases are encoded in a single Enhanced MR object. Phases are identified by the Temporal Position Index (0020,9128).
The user agent identifies the desired phases by requesting the Temporal Position Index values "2-5" corresponding to the contrast enhanced phases. The origin server identifies the frames within the Target Resource having the requested Temporal Position Index, determines that they meet the Presentation Input Type Volume Input Requirements, and proceeds to prepare the Volume Data.
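A minimal sketch of that frame selection, reading the Temporal Position Index from the Frame Content macro of the Per-Frame Functional Groups Sequence with pydicom:

    def select_frames_by_temporal_position(enhanced_mr, wanted_indices):
        """Return the 0-based frame numbers whose Temporal Position Index is in wanted_indices.

        enhanced_mr is a pydicom dataset of an Enhanced MR Image object.
        """
        selected = []
        for frame_number, group in enumerate(enhanced_mr.PerFrameFunctionalGroupsSequence):
            tpi = group.FrameContentSequence[0].TemporalPositionIndex
            if tpi in wanted_indices:
                selected.append(frame_number)
        return selected

    # e.g. contrast_frames = select_frames_by_temporal_position(ds, {2, 3, 4, 5})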
Multi-volume data consists of two or more simple volumes that are related and rendered simultaneously. Each time point is represented as a simple volume that meets the Volume Input Requirements.
In this example, the origin server assembles the pixel data of the matching frames into four simple volumes, one for each timepoint, as depicted in Figure XXX.7-6.
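A simplified sketch of that assembly, stacking the frames of each time point into a simple volume and the volumes into a (t, z, y, x) array; the spatial ordering within each time point is assumed to have been established as in the simple-volume example above.

    import numpy as np

    def build_temporal_volumes(enhanced_mr, frames_per_timepoint):
        """Assemble a (t, z, y, x) array from an Enhanced MR pixel array.

        frames_per_timepoint maps each Temporal Position Index to the spatially
        ordered list of frame numbers belonging to that time point.
        """
        all_frames = enhanced_mr.pixel_array      # shape (frames, rows, columns)
        volumes = [
            np.stack([all_frames[f] for f in frame_numbers], axis=0)
            for _, frame_numbers in sorted(frames_per_timepoint.items())
        ]
        return np.stack(volumes, axis=0)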
In this example, the user agent requests a 3D MIP. The origin server applies a "maximum_ip" algorithm, a method that projects, for each volume, the maximum intensity of the samples that fall in the path of each ray traced from the viewpoint to the plane of projection.
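An axis-aligned simplification of that projection, applied once per simple volume:

    import numpy as np

    def maximum_ip(volume, axis=1):
        """Maximum intensity projection of one simple volume along an axis-aligned ray direction."""
        return volume.max(axis=axis)

    # For the temporal case, one MIP frame per time point:
    # mip_frames = [maximum_ip(temporal[t]) for t in range(temporal.shape[0])]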
In this example, the user agent requests a top-down view. Since a video media type was requested and no animation parameters were provided to specify rotation of the 3D volumes, the origin server chooses not to apply any spatial animation. Instead, it applies a temporal animation, displaying each volume sequentially at a frame rate of 1 frame per second.
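As a sketch of this temporal animation, the per-time-point frames might be written out as an animated image. A GIF is used here with Pillow as a simple stand-in for the video media type that would actually be negotiated via the Accept header; the 1000 ms frame duration corresponds to the 1 frame-per-second animation described above.

    import io
    import numpy as np
    from PIL import Image

    def encode_temporal_animation(frames, frame_duration_ms=1000):
        """Encode one rendered frame per time point into an animated image (GIF stand-in)."""
        images = []
        for frame in frames:
            span = float(frame.max() - frame.min()) or 1.0
            scaled = ((frame - frame.min()) / span * 255).astype(np.uint8)
            images.append(Image.fromarray(scaled))

        buffer = io.BytesIO()
        images[0].save(
            buffer, format="GIF", save_all=True,
            append_images=images[1:], duration=frame_duration_ms, loop=0,
        )
        return buffer.getvalue()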