The information in this publication was considered technically sound by the consensus of persons engaged in the development and approval of the document at the time it was developed. Consensus does not necessarily mean that there is unanimous agreement among every person participating in the development of this document.
NEMA standards and guideline publications, of which the document contained herein is one, are developed through a voluntary consensus standards development process. This process brings together volunteers and/or seeks out the views of persons who have an interest in the topic covered by this publication. While NEMA administers the process and establishes rules to promote fairness in the development of consensus, it does not write the document and it does not independently test, evaluate, or verify the accuracy or completeness of any information or the soundness of any judgments contained in its standards and guideline publications.
NEMA disclaims liability for any personal injury, property, or other damages of any nature whatsoever, whether special, indirect, consequential, or compensatory, directly or indirectly resulting from the publication, use of, application, or reliance on this document. NEMA disclaims and makes no guaranty or warranty, expressed or implied, as to the accuracy or completeness of any information published herein, and disclaims and makes no warranty that the information in this document will fulfill any of your particular purposes or needs. NEMA does not undertake to guarantee the performance of any individual manufacturer or seller's products or services by virtue of this standard or guide.
In publishing and making this document available, NEMA is not undertaking to render professional or other services for or on behalf of any person or entity, nor is NEMA undertaking to perform any duty owed by any person or entity to someone else. Anyone using this document should rely on his or her own independent judgment or, as appropriate, seek the advice of a competent professional in determining the exercise of reasonable care in any given circumstances. Information and other standards on the topic covered by this publication may be available from other sources, which the user may wish to consult for additional views or information not covered by this publication.
NEMA has no power, nor does it undertake to police or enforce compliance with the contents of this document. NEMA does not certify, test, or inspect products, designs, or installations for safety or health purposes. Any certification or other statement of compliance with any health or safety-related information in this document shall not be attributable to NEMA and is solely the responsibility of the certifier or maker of the statement.
This DICOM Standard was developed according to the procedures of the DICOM Standards Committee.
The DICOM Standard is structured as a multi-part document using the guidelines established in [ISO/IEC Directives, Part 2].
PS3.1 should be used as the base reference for the current parts of this Standard.
DICOM® is the registered trademark of the National Electrical Manufacturers Association for its standards publications relating to digital communications of medical information, all rights reserved.
HL7® and CDA® are the registered trademarks of Health Level Seven International, all rights reserved.
SNOMED®, SNOMED Clinical Terms®, SNOMED CT® are the registered trademarks of the International Health Terminology Standards Development Organisation (IHTSDO), all rights reserved.
LOINC® is the registered trademark of Regenstrief Institute, Inc, all rights reserved.
This Part of the DICOM Standard contains explanatory information in the form of Normative and Informative Annexes.
The following standards contain provisions which, through reference in this text, constitute provisions of this Standard. At the time of publication, the editions indicated were valid. All standards are subject to revision, and parties to agreements based on this Standard are encouraged to investigate the possibilities of applying the most recent editions of the standards indicated below.
[ISO/IEC Directives, Part 2] 2016/05. 7.0. Rules for the structure and drafting of International Standards. http://www.iec.ch/members_experts/refdocs/iec/isoiecdir-2%7Bed7.0%7Den.pdf .
[IHE RAD TF-1] 2020. Integrating the Healthcare Enterprise Radiology Technical Framework Volume 1 Integration Profiles. http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_TF_Vol1.pdf .
[IHE RAD TF-2] 2020. Integrating the Healthcare Enterprise Radiology Technical Framework Volume 2 Transactions. http://www.ihe.net/uploadedFiles/Documents/Radiology/IHE_RAD_TF_Vol2.pdf .
[RFC7233] June 2014. Hypertext Transfer Protocol (HTTP/1.1): Range Requests. http://tools.ietf.org/html/rfc7233 .
For the purposes of this Standard the following definitions apply.
This Part of the Standard makes use of the following terms defined in PS3.1:
This Part of the Standard makes use of the following terms defined in PS3.2:
This Part of the Standard makes use of the following terms defined in PS3.3:
This Part of the Standard makes use of the following terms defined in PS3.4:
This Part of the Standard makes use of the following terms defined in PS3.5:
Terms listed in Section 3 are capitalized throughout the document.
This Annex was formerly located in Annex E “Explanation of Patient Orientation (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
This Annex provides an explanation of how to use the patient orientation data elements.
As with the hand, the direction labels are based on the foot in the standard anatomic position. For the right foot, for example, RIGHT is in the direction of the 5th toe. This assignment remains constant through movement or repositioning of the extremity; the same is true of the HEAD and FOOT directions.
This Annex was formerly located in Annex G “Integration of Modality Worklist and Modality Performed Procedure Step in the Original DICOM Standard (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
DICOM was published in 1993 and effectively addresses image communication for a number of modalities and Image Management functions for a significant part of the field of medical imaging. Since then, many additional medical imaging specialties have contributed to the extension of the DICOM Standard and developed additional Image Object Definitions. Furthermore, there have been discussions about harmonizing the DICOM Real-World domain model with those of other standardization bodies. This effort has resulted in a number of extensions to the DICOM Standard. The integration of the Modality Worklist and Modality Performed Procedure Step addresses an important part of the domain area that was not included initially in the DICOM Standard. At the same time, the Modality Worklist and Modality Performed Procedure Step integration is a step toward harmonization with other standardization bodies (CEN TC 251, HL7, etc.).
The purpose of this Annex is to show how the original DICOM Standard relates to the extension for Modality Worklist Management and Modality Performed Procedure Step. The two included figures outline the void filled by the Modality Worklist Management and Modality Performed Procedure Step specification, and the relationship between the original DICOM Data Model and the extended model.
Figure B-1. Functional View - Modality Worklist and Modality Performed Procedure Step Management in the Context of DICOM Service Classes
The management of a patient starts when the patient enters a physical facility (e.g., a hospital, a clinic, an imaging center) or even before that time. The DICOM Patient Management SOP Class provides many of the functions that are of interest to imaging departments. Figure B-1 is an example where one presumes that an order for a procedure has been issued for a patient. The order for an imaging procedure results in the creation of a Study Instance within the DICOM Study Management SOP Class. At the same time (A) the Modality Worklist Management SOP Class enables a modality operator to request the scheduling information for the ordered procedures. A worklist can be constructed based on the scheduling information. The handling of the requested imaging procedure in DICOM Study Management and in DICOM Worklist Management are closely related. The worklist also conveys patient/study demographic information that can be incorporated into the images.
Worklist Management is completed once the imaging procedure has started and the Scheduled Procedure Step has been removed from the Worklist, possibly in response to the Modality Performed Procedure Step (B). However, Study Management continues throughout all stages of the Study, including interpretation. The actual procedure performed (based on the request) and information about the images produced are conveyed by the DICOM Study Component SOP Class or the Modality Performed Procedure Step SOP Classes.
Figure B-2. Relationship of the Original Model and the Extensions for Modality Worklist and Modality Performed Procedure Step Management
Figure B-2 shows the relationship between the original DICOM Real-World model and the extensions of this Real-World model required to support the Modality Worklist and the Modality Performed Procedure Step. The new parts of the model add entities that are needed to request, schedule, and describe the performance of imaging procedures, concepts that were not supported in the original model. The entities required for representing the Worklist form a natural extension of the original DICOM Real-World model.
Common to both the original model and the extended model is the Patient entity. The Service Episode is an administrative concept that has been shown in the extended model in order to pave the way for future adaptation to a common model supported by other standardization groups including HL7, CEN TC 251 WG 3, CAP-IEC, etc. The Visit is in the original model but not shown in the extended model because it is a part of the Service Episode.
There is a one-to-one relationship between a Requested Procedure and the DICOM Study (A): a DICOM Study is the result of a single Requested Procedure, and a Requested Procedure can result in only one Study.
An n:m relationship exists between a Scheduled Procedure Step and a Modality Performed Procedure Step (B). The concept of a Modality Performed Procedure Step is a superset of the Study Component concept contained in the original DICOM model. The Modality Performed Procedure Step SOP Classes provide a means to relate Modality Performed Procedure Steps to Scheduled Procedure Steps.
This Annex was formerly located in Annex J “Waveforms (Informative)” in PS3.3 in the 2003 and earlier revisions of the Standard.
Waveform acquisition is part of both the medical imaging environment and the general clinical environment. Because of its broad use, there has been significant previous and complementary work in waveform standardization of which the following are particularly important:
Specification for Transferring Digital Neurophysiological Data Between Independent Computer Systems
Standard Communications Protocol for Computer-Assisted Electrocardiography (SCP-ECG).
For DICOM, the domain of waveform standardization is waveform acquisition within the imaging context. It is specifically meant to address waveform acquisitions that will be analyzed with other data that is transferred and managed using the DICOM protocol. It allows the addition of waveform data to that context with minimal incremental cost. Further, it leverages the DICOM persistent object capability for maintaining referential relationships to other data collected in a multi-modality environment, including references necessary for multi-modality synchronization.
Waveform interchange in other clinical contexts may use different protocols more appropriate to those domains. In particular, HL7 may be used for transfer of waveform observations to general clinical information systems, and MIB may be used for real-time physiological monitoring and therapy.
The waveform information object definition in DICOM has been specifically harmonized at the semantic level with the HL7 waveform message format. The use of a common object model allows straightforward transcoding and interoperation between systems that use DICOM for waveform interchange and those that use HL7, and may be viewed as an example of common semantics implemented in the differing syntaxes of two messaging systems.
HL7 allows transport of DICOM SOP Instances (information objects) encapsulated within HL7 messages. Since the DICOM and HL7 waveform semantics are harmonized, DICOM Waveform SOP Instances need not be transported as encapsulated data, as they can be transcoded to native HL7 Waveform Observation format.
The following are specific use case examples for waveforms in the imaging environment.
Case 1: Catheterization Laboratory - During a cardiac catheterization, several independent pieces of data acquisition equipment may be brought together for the exam. An electrocardiographic subsystem records surface ECG waveforms; an X-ray angiographic subsystem records motion images; a hemodynamic subsystem records intracardiac pressures from a sensor on the catheter. These subsystems send their acquired data by network to a repository. These data are assembled at an analytic workstation by retrieving from the repository. For a left ventriculographic procedure, the ECG is used by the physician to determine the time of maximum and minimum ventricular fill, and when coordinated with the angiographic images, an accurate estimate of the ejection fraction can be calculated. For a valvuloplasty procedure, the hemodynamic waveforms are used to calculate the pre-intervention and post-intervention pressure gradients.
Case 2: Electrophysiology Laboratory - An electrophysiological exam will capture waveforms from multiple sensors on a catheter; the placement of the catheter in the heart is captured on an angiographic image. At an analytic workstation, the exact location of the sensors can thus be aligned with a model of the heart, and the relative timing of the arrival of the electrophysiological waves at different cardiac locations can be mapped.
Case 3: Stress Exam - A stress exam may involve the acquisition of both ECG waveforms and echocardiographic ultrasound images from portable equipment at different stages of the test. The waveforms and the echocardiograms are output on an interchange disk, which is then input and read at a review station. The physician analyzes both types of data to make a diagnosis of cardiac health.
Synchronization of acquisition across multiple modalities in a single study (e.g., angiography and electrocardiography) requires either a shared trigger, or a shared clock. A Synchronization Module within the Frame of Reference Information Entity specifies the synchronization mechanism. A common temporal environment used by multiple equipment is identified by a shared Synchronization Frame of Reference UID. How this UID is determined and distributed to the participating equipment is outside the scope of the Standard.
The method used for time synchronization of equipment clocks is implementation or site specific, and therefore outside the scope of this proposal. If required, standard time distribution protocols are available (e.g., NTP, IRIG, GPS).
An informative description of time distribution methods can be found at: http://web.archive.org/web/20001001065227/http://www.bancomm.com/cntpApp.htm
A second method of synchronizing acquisitions is to utilize a common reference channel (temporal fiducial), which is recorded in the data acquired from the several equipment units participating in a study, and/or that is used to trigger synchronized data acquisitions. For instance, the "X-ray on" pulse train that triggers the acquisition of frames for an X-ray angiographic SOP Instance can be recorded as a waveform channel in a simultaneously acquired hemodynamic waveform SOP Instance, and can be used to align the different object instances. Associated with this Supplement are proposed coded entry channel identifiers to specifically support this synchronization mechanism (DICOM Terminology Mapping Resource Context Group ID 3090).
Figure C.4-1 shows a canonical model of waveform data acquisition. A patient is the subject of the study. There may be several sensors placed at different locations on or in the patient, and waveforms are measurements of some physical quality (metric) by those sensors (e.g., electrical voltage, pressure, gas concentration, or sound). The sensor is typically connected to an amplifier and filter, and its output is sampled at constant time intervals and digitized. In most cases, several signal channels are acquired synchronously. The measured signal usually originates in the anatomy of the patient, but an important special case is a signal that originates in the equipment, either as a stimulus, such as a cardiac pacing signal, as a therapy, such as a radio frequency signal used for ablation, or as a synchronization signal.
The part of the composite information object that carries the waveform data is the Waveform Information Entity (IE). The Waveform IE includes the technical parameters of waveform acquisition and the waveform samples.
The information model, or internal organizational structure, of the Waveform IE is shown in Figure C.5-1. A waveform information object includes data from a continuous time period during which signals were acquired. The object may contain several multiplex groups, each defined by digitization with the same clock whose frequency is defined for the group. Within each multiplex group there will be one or more channels, each with a full technical definition. Finally, each channel has its set of digital waveform samples.
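The hierarchy described above (waveform object → multiplex groups → channels → samples) can be sketched with plain data classes. This is an illustrative model only; the class and field names below are invented for clarity and are not DICOM attribute names.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Channel:
    source: str                    # e.g., "Lead I"; a full technical definition in DICOM
    samples: List[int] = field(default_factory=list)

@dataclass
class MultiplexGroup:
    sampling_frequency_hz: float   # one digitization clock per multiplex group
    channels: List[Channel] = field(default_factory=list)

@dataclass
class WaveformIE:
    multiplex_groups: List[MultiplexGroup] = field(default_factory=list)

# A waveform object with one multiplex group of two synchronously
# sampled ECG channels (sample values are placeholders):
ecg = WaveformIE(multiplex_groups=[
    MultiplexGroup(
        sampling_frequency_hz=500.0,
        channels=[
            Channel(source="Lead I", samples=[0, 3, 7]),
            Channel(source="Lead II", samples=[1, 4, 8]),
        ],
    )
])
```

Note that the sampling frequency is held at the multiplex-group level, mirroring the rule that all channels in a group are digitized with the same clock.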
This Waveform IE definition is harmonized with the HL7 waveform semantic constructs, including the channel definition Attributes and the use of multiplex groups for synchronously acquired channels. The use of a common object model allows straightforward transcoding and interoperation between systems that use DICOM for waveform interchange and those that use HL7, and may be viewed as an example of common semantics implemented in the differing syntaxes of two messaging systems.
This section describes the congruence between the DICOM Waveform IE and the HL7 version 2.3 waveform message format (see HL7 version 2.3 Chapter 7, sections 7.14 - 7.20).
Waveforms in HL7 messages are sent in a set of OBX (Observation) Segments. Four subtypes of OBX segments are defined:
The CHN subtype defines one channel in a CD (Channel Definition) Data Type
The TIM subtype defines the start time of the waveform data in a TS (Time String) Data Type
The WAV subtype carries the waveform data in an NA (Numeric Array) or MA (Multiplexed Array) Data Type (ASCII encoded samples, character delimited)
The ANO subtype carries an annotation in a CE (Coded Entry) Data Type with a reference to a specific time within the waveform to which the annotation applies
Other segments of the HL7 message definition specify patient and study identification, whose harmonization with DICOM constructs is not defined in this Annex.
The Waveform Module Channel Definition sequence Attribute (003A,0200) is defined in harmonization with the HL7 Channel Definition (CD) Data Type, in accordance with the following Table. Each Item in the Channel Definition sequence Attribute corresponds to an OBX Segment of subtype CHN.
Table C.6-1. Correspondence Between DICOM and HL7 Channel Definition
In the DICOM information object definition, the sampling frequency is defined for the multiplex group, while in HL7 it is defined for each channel, but is required to be identical for all multiplexed channels.
Note that in the HL7 syntax, Waveform Source is a string, rather than a coded entry as used in DICOM. This should be considered in any transcoding between the two formats.
In HL7, the exact start time for waveform data is sent in an OBX Segment of subtype TIM. The corresponding DICOM Attributes, which must be combined to form the equivalent time string, are:
The DICOM binary encoding of data samples in the Waveform Data (5400,1010) corresponds to the ASCII representation of data samples in the HL7 OBX Segment of subtype WAV. The same channel-interleaved multiplexing used in the HL7 MA (Multiplexed Array) Data Type is used in the DICOM Waveform Data Attribute.
Because of its binary representation, DICOM uses several data elements to specify the precise encoding, as listed in the following Table. There are no corresponding HL7 data elements, since HL7 uses explicit character-delimited ASCII encoding of data samples.
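The channel-interleaved layout shared by the HL7 MA Data Type and the DICOM Waveform Data Attribute can be illustrated with a short de-interleaving sketch. It assumes 16-bit signed little-endian samples; the byte string is fabricated for the example and the function name is illustrative.

```python
import struct

def deinterleave(waveform_bytes: bytes, num_channels: int) -> list:
    """Split interleaved samples (c0s0, c1s0, c0s1, c1s1, ...) into
    one list per channel, assuming 16-bit signed little-endian data."""
    count = len(waveform_bytes) // 2
    samples = struct.unpack("<%dh" % count, waveform_bytes)
    # Every num_channels-th sample, offset by the channel index:
    return [list(samples[ch::num_channels]) for ch in range(num_channels)]

# Two channels, three samples each, interleaved:
raw = struct.pack("<6h", 10, 100, 11, 101, 12, 102)
channels = deinterleave(raw, num_channels=2)
# channels[0] is [10, 11, 12]; channels[1] is [100, 101, 102]
```

In a real instance, the sample width and signedness would be taken from the encoding data elements listed in the Table rather than assumed.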
In HL7, Waveform Annotation is sent in an OBX Segment of subtype ANO, using the CE (Coded Entry) Data Type. This corresponds precisely to the DICOM Annotation using Coded Entry Sequences. However, an HL7 annotation refers only to a single point in time, while DICOM allows reference to ranges of samples delimited by time or by explicit sample position.
The SCP-ECG standard is designed for recording routine resting electrocardiograms. Such ECGs are reviewed prior to cardiac imaging procedures, and a typical use case would be for SCP-ECG waveforms to be translated to DICOM for inclusion with the full cardiac imaging patient record.
SCP-ECG provides for either simultaneous or non-simultaneous recording of the channels, but does not provide a multiplexed data format (each channel is separately encoded). When translating to DICOM, each subset of simultaneously recorded channels may be encoded in a Waveform Sequence Item (multiplex group), and the delay to the recording of each multiplex group shall be encoded in the Multiplex Group Time Offset (0018,1068).
The electrode configuration of SCP-ECG Section 1 may be translated to the DICOM Acquisition Context (0040,0555) sequence items using TID 3401 “ECG Acquisition Context” and Context Groups 3263 and 3264.
The lead identification of SCP-ECG Section 3, a term coded as an unsigned integer, may be translated to the DICOM Waveform Channel Source (003A,0208) coded sequence using CID 3001 “ECG Lead”.
Pacemaker spike records of SCP-ECG Section 7 may be translated to items in the Waveform Annotations Sequence (0040,B020) with a code term from CID 3335 “ECG Annotation”. The annotation sequence item may record the spike amplitude in its Numeric Value and Measurement Units Attributes.
This Annex was formerly located in Annex K “SR Encoding Example (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
The following is a simple and non-comprehensive illustration of the encoding of the Informative SR Content Tree Example in PS3.3.
This Annex was formerly located in Annex L “Mammography CAD (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
The templates for the Mammography CAD SR IOD are defined in Mammography CAD SR IOD Templates in PS3.16. Relationships defined in the Mammography CAD SR IOD templates are by-value, unless otherwise stated. Content Items referenced from another SR object instance, such as a prior Mammography CAD SR, are inserted by-value in the new SR object instance, with appropriate original source observation context. Within Content Items paraphrased from another source, the Rendering Intent and the Content Item identifiers referenced by by-reference relationships need to be updated.
The Document Root, Image Library, Summaries of Detections and Analyses, and CAD Processing and Findings Summary sub-trees together form the Content Tree of the Mammography CAD SR IOD. There are no constraints regarding the 1-n multiplicity of the Individual Impression/Recommendation or its underlying structure, other than the TID 4001 “Mammography CAD Overall Impression/Recommendation” and TID 4003 “Mammography CAD Individual Impression/Recommendation” requirements in PS3.16. Individual Impression/Recommendation containers may be organized, for example per image, per finding or composite feature, or some combination thereof.
The Summary of Detections and Summary of Analyses sub-trees identify the algorithms used and the work done by the CAD device, and whether or not each process was performed on one or more entire images or selected regions of images. The findings of the detections and analyses are not encoded in the summary sub-trees, but rather in the CAD Processing and Findings Summary sub-tree. CAD processing may produce no findings, in which case the sub-trees of the CAD Processing and Findings Summary sub-tree are incompletely populated. This occurs in the following situations:
If the tree contains no Individual Impression/Recommendation nodes and all attempted detections and analyses succeeded, then the mammography CAD device made no findings.
Detections and Analyses that are not attempted are not listed in the Summary of Detections and Summary of Analyses trees.
If the code value of the Summary of Detections or Summary of Analyses codes in TID 4000 “Mammography CAD Document Root” is "Not Attempted" then no detail is provided as to which algorithms were not attempted.
Figure E.1-3. Example of Individual Impression/Recommendation Levels of Mammography CAD SR Content Tree
The shaded area in Figure E.1-3 demarcates information resulting from Detection, whereas the unshaded area is information resulting from Analysis. This distinction is used in determining whether to place algorithm identification information in the Summary of Detections or Summary of Analyses sub-trees.
The clustering of calcifications within a single image is considered to be a Detection process that results in a Single Image Finding. The spatial correlation of a calcification cluster in two views, resulting in a Composite Feature, is considered Analysis. The clustering of calcifications in a single image is the only circumstance in which a Single Image Finding can result from the combination of other Single Image Findings, which must be Individual Calcifications.
Once a Single Image Finding or Composite Feature has been instantiated, it may be referenced by any number of Composite Features higher in the tree.
Any Content Item in the Content Tree that has been inserted (i.e., duplicated) from another SR object instance has a HAS OBS CONTEXT relationship to one or more Content Items that describe the context of the SR object instance from which it originated. This mechanism may be used to combine reports (e.g., Mammography CAD 1, Mammography CAD 2, Human).
By-reference relationships within Single Image Findings and Composite Features paraphrased from prior Mammography CAD SR objects need to be updated to properly reference Image Library Entries carried from the prior object to their new positions in the present object.
The Impression/Recommendation section of the SR Document Content Tree of a Mammography CAD SR IOD may contain a mixture of current and prior single image findings and composite features. The Content Items from current and prior contexts are target Content Items that have a by-value INFERRED FROM relationship to a Composite Feature Content Item. Content Items that come from a context other than the Initial Observation Context have a HAS OBS CONTEXT relationship to target Content Items that describe the context of the source document.
In Figure E.2-1, Composite Feature and Single Image Finding are current, and Single Image Finding (from Prior) is duplicated from a prior document.
The following is a simple and non-comprehensive illustration of an encoding of the Mammography CAD SR IOD for Mammography computer aided detection results. For brevity, some Mandatory Content Items are not included, such as several acquisition context Content Items for the images in the Image Library.
A mammography CAD device processes a typical screening mammography case, i.e., there are four films and no cancer. Mammography CAD runs both density and calcification detection successfully and finds nothing. The mammograms resemble:
The Content Tree structure would resemble:
A mammography CAD device processes a screening mammography case with four films and a mass in the left breast. Mammography CAD runs both density and calcification detection successfully. It finds two densities in the LCC, one density in the LMLO, a cluster of two calcifications in the RCC and a cluster of 20 calcifications in the RMLO. It performs two clustering algorithms. One identifies individual calcifications and then clusters them, and the second simply detects calcification clusters. It performs mass correlation and combines one of the LCC densities and the LMLO density into a mass; the other LCC density is flagged Not for Presentation, therefore not intended for display to the end-user. The mammograms resemble:
The Content Tree structure in this example is complex. Structural illustrations of portions of the Content Tree are placed within the Content Tree table to show the relationships of data within the tree. Some Content Items are duplicated (and shown in boldface) to facilitate use of the diagrams.
The patient in Example 2 returns for another mammogram. A more comprehensive mammography CAD device processes the current mammogram; analyses are performed that determine some Content Items for Overall and Individual Impression/Recommendations. Portions of the prior mammography CAD report (Example 2) are incorporated into this report. In the current mammogram the number of calcifications in the RCC has increased, and the size of the mass in the left breast has increased from 1 cm² to 4 cm².
Italicized entries (xxx) in the following table denote references to or by-value inclusion of Content Tree items reused from the prior Mammography CAD SR instance (Example 2).
While the Image Library contains references to Content Tree items reused from the prior Mammography CAD SR instance, the images are actually used in the mammography CAD analysis and are therefore not italicized as indicated above.
Included content from prior mammography CAD report (see Example 2, starting with node 1.2.1.2)
Included content from prior mammography CAD report (see Example 2, starting with node 1.2.4.2)
Computer-aided detection algorithms often compute an internal "CAD score" for each Single Image Finding detected by the algorithm. In some implementations the algorithms then group the findings into "bins" as a function of their CAD score. The number of bins is a function of the algorithm and the manufacturer's implementation, and must be one or more. The bins allow an application that is displaying CAD marks to provide a number of operating points on the Free-response Receiver-Operating Characteristic (FROC) curve for the algorithm, as illustrated in Figure E.4-1.
This is accomplished by displaying all CAD marks of Rendering Intent "Presentation Required" or "Presentation Optional" according to the following rules:
if the display application's Operating Point is 0, only marks with a Rendering Intent = "Presentation Required" are displayed
if the display application's Operating Point is 1, then marks with a Rendering Intent = "Presentation Required" and marks with a Rendering Intent = "Presentation Optional" with a CAD Operating Point = 1 are displayed
if the display application's Operating Point is n, then marks with a Rendering Intent = "Presentation Required" and marks with a Rendering Intent = "Presentation Optional" with a CAD Operating Point <= n are displayed
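The display rules above can be sketched as a simple filter. This is an illustrative sketch only; the dictionary keys stand in for the Rendering Intent and CAD Operating Point Content Items of each CAD mark, and are not DICOM attribute names.

```python
def marks_to_display(marks, display_operating_point):
    """Return the CAD marks visible at the given display operating point."""
    visible = []
    for mark in marks:
        if mark["rendering_intent"] == "Presentation Required":
            # Always displayed, at every operating point (including 0).
            visible.append(mark)
        elif (mark["rendering_intent"] == "Presentation Optional"
              and mark["cad_operating_point"] <= display_operating_point):
            # Optional marks appear once the display operating point
            # reaches the mark's CAD Operating Point.
            visible.append(mark)
    return visible

marks = [
    {"id": 1, "rendering_intent": "Presentation Required"},
    {"id": 2, "rendering_intent": "Presentation Optional", "cad_operating_point": 1},
    {"id": 3, "rendering_intent": "Presentation Optional", "cad_operating_point": 2},
]
# At operating point 0 only mark 1 is shown; at 1, marks 1 and 2; at 2, all three.
```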
If a Mammography CAD SR Instance references Digital Mammography X-ray Image Storage - For Processing Instances, but a review workstation has access only to Digital Mammography X-Ray Image Storage - For Presentation Instances, the following steps are recommended in order to display such Mammography CAD SR content with Digital Mammography X-Ray Image - For Presentation Instances.
In most scenarios, the Mammography CAD SR Instance is assigned to the same DICOM Patient and Study as the corresponding Digital Mammography "For Processing" and "For Presentation" image Instances.
If a workstation has a Mammography CAD SR Instance, but does not have images for the same DICOM Patient and Study, the workstation may use the Patient and Study Attributes of the Mammography CAD SR Instance in order to Query/Retrieve the Digital Mammography "For Presentation" images for that Patient and Study.
Once a workstation has the Mammography CAD SR Instance and Digital Mammography "For Presentation" image Instances for the Patient and Study, the Source Image Sequence (0008,2112) Attribute of each Digital Mammography "For Presentation" Instance will reference the corresponding Digital Mammography "For Processing" Instance. The workstation can match the referenced Digital Mammography "For Processing" Instance to a Digital Mammography "For Processing" Instance referenced in the Mammography CAD SR.
The workstation should check for Spatial Locations Preserved (0028,135A) in the Source Image Sequence of each Digital Mammography "For Presentation" image Instance, to determine whether it is spatially equivalent to the corresponding Digital Mammography "For Processing" image Instance.
If the value of Spatial Locations Preserved (0028,135A) is YES, then the CAD results should be displayed.
If the value of Spatial Locations Preserved (0028,135A) is NO, then the CAD results should not be displayed.
If Spatial Locations Preserved (0028,135A) is not present, whether or not the images are spatially equivalent is not known. If the workstation chooses to proceed with attempting to display CAD results, then compare the Image Library (see TID 4020 “CAD Image Library Entry”) Content Item values of the Mammography CAD SR Instance to the associated Attribute values in the corresponding Digital Mammography "For Presentation" image Instance. The Content Items (111044, DCM, "Patient Orientation Row"), (111043, DCM, "Patient Orientation Column"), (111026, DCM, "Horizontal Pixel Spacing"), and (111066, DCM, "Vertical Pixel Spacing") may be used for this purpose. If the values do not match, the workstation needs to adjust the coordinates of the findings in the Mammography CAD SR content to match the spatial characteristics of the Digital Mammography "For Presentation" image Instance.
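The decision procedure above can be sketched as follows; the dicts stand in for the relevant DICOM data sets, and the key names are simplified illustrations rather than exact DICOM keywords:

```python
def cad_display_decision(source_image_item, sr_library_entry, presentation_image):
    """Decide whether/how to display CAD results on a "For Presentation" image.

    Returns "display", "do not display", or "adjust coordinates".
    `source_image_item` models the Source Image Sequence item of the
    "For Presentation" instance; the other two model the SR Image
    Library entry and the "For Presentation" image attributes.
    """
    preserved = source_image_item.get("SpatialLocationsPreserved")  # (0028,135A)
    if preserved == "YES":
        return "display"
    if preserved == "NO":
        return "do not display"
    # Attribute absent: spatial equivalence is unknown, so compare the
    # Image Library Content Item values against the "For Presentation"
    # instance before deciding.
    keys = ("PatientOrientationRow", "PatientOrientationColumn",
            "HorizontalPixelSpacing", "VerticalPixelSpacing")
    if all(sr_library_entry.get(k) == presentation_image.get(k) for k in keys):
        return "display"
    return "adjust coordinates"
```

In the "adjust coordinates" case, the workstation would transform the finding coordinates in the SR content to the spatial characteristics of the "For Presentation" image before rendering.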
This Annex was formerly located in Annex M “Chest CAD (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
The templates for the Chest CAD SR IOD are defined in Annex A “Structured Reporting Templates (Normative)” in PS3.16. Relationships defined in the Chest CAD SR IOD templates are by-value, unless otherwise stated. Content Items referenced from another SR object instance, such as a prior Chest CAD SR, are inserted by-value in the new SR object instance, with appropriate original source observation context. Within Content Items paraphrased from another source, the Rendering Intent and the referenced Content Item identifiers of by-reference relationships need to be updated.
The Document Root, Image Library, CAD Processing and Findings Summary, and Summaries of Detections and Analyses sub-trees together form the Content Tree of the Chest CAD SR IOD. See Annex E for additional explanation of the Summaries of Detections and Analyses sub-trees.
The shaded area in Figure F.1-2 demarcates information resulting from Detection, whereas the unshaded area is information resulting from Analysis. This distinction is used in determining whether to place algorithm identification information in the Summary of Detections or Summary of Analyses sub-trees.
The identification of a lung nodule within a single image is considered to be a Detection, which results in a Single Image Finding. The temporal correlation of a lung nodule in two instances of the same view taken at different times, resulting in a Composite Feature, is considered Analysis.
Once a Single Image Finding or Composite Feature has been instantiated, it may be referenced by any number of Composite Features higher in the CAD Processing and Findings Summary sub-tree.
Any Content Item in the Content Tree that has been inserted (i.e., duplicated) from another SR object instance has a HAS OBS CONTEXT relationship to one or more Content Items that describe the context of the SR object instance from which it originated. This mechanism may be used to combine reports (e.g., Chest CAD SR 1, Chest CAD SR 2, Human).
By-reference relationships within Single Image Findings and Composite Features paraphrased from prior Chest CAD SR objects need to be updated to properly reference Image Library Entries carried from the prior object to their new positions in the present object.
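As a minimal sketch of this bookkeeping, assuming Content Items are modeled as dicts whose by-reference relationships carry the target's Content Tree position (e.g., "1.2.1") in a hypothetical "referenced_id" field:

```python
def remap_references(content_items, id_map):
    """Update by-reference target identifiers in Content Items copied
    from a prior SR instance.

    `id_map` maps Content Tree positions of Image Library Entries in
    the prior object to their new positions in the present object.
    The dict layout is illustrative, not a DICOM encoding.
    """
    for item in content_items:
        ref = item.get("referenced_id")
        if ref in id_map:
            # Retarget the reference at the entry's new position.
            item["referenced_id"] = id_map[ref]
    return content_items
```

Any reference not covered by the map (e.g., one pointing at content that was not carried forward) is left untouched in this sketch; a real implementation would need a policy for such dangling references.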
The CAD Processing and Findings Summary section of the SR Document Content Tree of a Chest CAD SR IOD may contain a mixture of current and prior single image findings and composite features. The Content Items from current and prior contexts are target Content Items that have a by-value INFERRED FROM relationship to a Composite Feature Content Item. Content Items that come from a context other than the Initial Observation Context have a HAS OBS CONTEXT relationship to target Content Items that describe the context of the source document.
In Figure F.2-1, Composite Feature and Single Image Finding are current, and Single Image Finding (from Prior) is duplicated from a prior document.
The following is a simple and non-comprehensive illustration of an encoding of the Chest CAD SR IOD for chest computer aided detection results. For brevity, some mandatory Content Items are not included, such as several acquisition context Content Items for the images in the Image Library.
A chest CAD device processes a typical screening chest case, i.e., there is one image and no nodule findings. Chest CAD runs lung nodule detection successfully and finds nothing.
The chest radiograph resembles:
The Content Tree structure would resemble:
A chest CAD device processes a screening chest case with one image, and a lung nodule detected. The chest radiograph resembles:
The Content Tree structure in this example is complex. Structural illustrations of portions of the Content Tree are placed within the Content Tree table to show the relationships of data within the tree. Some Content Items are duplicated (and shown in boldface) to facilitate use of the diagrams.
The Content Tree structure would resemble:
The patient in Example 2 returns for another chest radiograph. A more comprehensive chest CAD device processes the current chest radiograph, and analyses are performed that determine some temporally related Content Items for Composite Features. Portions of the prior chest CAD report (Example 2) are incorporated into this report. In the current chest radiograph the lung nodule has increased in size.
Italicized entries (xxx) in the following table denote references to or by-value inclusion of Content Tree items reused from the prior Chest CAD SR instance (Example 2).
While the Image Library contains references to Content Tree items reused from the prior Chest CAD SR instance, the images are actually used in the chest CAD analysis and are therefore not italicized as indicated above.
The CAD processing and findings consist of one Composite Feature, comprising Single Image Findings, one from each year. The temporal relationship allows a quantitative temporal difference to be calculated:
The patient in Example 3 is called back for CT to confirm the Lung Nodule found in Example 3. The patient undergoes CT of the Thorax and the initial chest radiograph and CT slices are sent to a more comprehensive CAD device for processing. Findings are detected and analyses are performed that correlate findings from the two collections of data. Portions of the prior CAD report (Example 3) are incorporated into this report.
Italicized entries (xxx) in the following table denote references to or by-value inclusion of Content Tree items reused from the prior Chest CAD SR instance (Example 3).
While the Image Library contains references to Content Tree items reused from the prior Chest CAD SR instance, the images are actually used in the CAD analysis and are therefore not italicized as indicated above.
Most recent examination content:
This Annex was formerly located in Annex N “Explanation of Grouping Criteria for Multi-frame Functional Group IODs (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
When considering how to group an Attribute, one first needs to consider whether the values of the Attribute differ per frame. The reasons to consider whether to allow an Attribute to change include:
The more Attributes that change, the more parsing a receiving application has to do in order to determine if the multi-frame object has frames the application should deal with. The more choices, the more complex the application becomes, potentially resulting in interoperability problems.
The frequency of change of an Attribute must also be considered. If an Attribute could be changed every frame then obviously it is not a very good candidate for making it fixed, since this would result in a multi-frame size of 1.
The number of applications that depend on frame level Attribute grouping is another consideration. For example, one might imagine a pulse sequence being changed in a real-time acquisition, but the vast majority of acquisitions would leave this constant. Therefore, it was judged not too large a burden to force an acquisition device to start a new object when this happens. Obviously, this is a somewhat subjective decision, and one should take a close look at the Attributes that are required to be fixed in this document.
The Attributes from the Image Pixel Module must not change in a multi-frame object due to legacy tool kits and implementations.
The potential frequency of change is dependent on the applications both now and likely during the life of this Standard. The penalty for failure to allow an Attribute to change is rather high since it will be hard/impossible to change later. Making an Attribute variable that is static is more complex and could result in more header space usage depending on how it is grouped. Thus there is a trade-off of complexity and potentially header size with not being able to take advantage of the multi-frame organization for an application that requires changes per frame.
Once it is decided which Attributes should be changed within a multi-frame object then one needs to consider the criteria for grouping Attributes together:
Groupings should be designed so those Attributes that are likely to vary together should be in the same sequence. The goal is to avoid the case where Attributes that are mostly static have to be included in a sequence that is repeated for every frame.
Care should be taken so that we define a manageable number of grouping sequences. Too few sequences could result in many static Attributes being repeated for each frame, when some other element in their sequence was varying, and too many sequences becomes unwieldy.
The groupings should be designed such that modality-independent Attributes are kept separate from those that are MR specific. This will presumably allow future working groups to reuse the more general groupings. It also should allow software that operates on multi-frame objects from multiple modalities to maximize code reuse.
Grouping related Attributes together could convey some semantics of the overall contents of the multi-frame object to receiving applications. For instance, if a volumetric application finds the Plane Orientation Macro present in the Per-Frame Functional Groups Sequence, it may decide to reject the object as inappropriate for volumetric calculations.
Specific notes on Attribute grouping:
Attributes not allowed to change: Image Pixel Module (due to legacy toolkit concerns); and Pulse Sequence Module Attributes (normally do not change except in real-time - it is expected real time applications can handle the complexity and speed of starting new IODs when pulse sequence changes).
Sequences not starting with the word "MR" could be applied to more modalities than just MR.
All Attributes that must be in a frame header were placed in the Frame Content Macro.
Position and orientation are in separate sequences since they are changed independently.
For real-time sequences there are contrast mechanisms that can be applied to base pulse sequences and are turned on and off by the operator depending on the anatomy being imaged and the time/contrast trade-off associated with these. Such modifiers include IR, flow compensation, spoiled, MT, and T2 preparation, among others. These are typically not changed in non-real-time scans. They are all kept in the MR Modifier Macro.
The "Number of Averages" Attribute is in its own sequence because real-time applications may start a new averaging process every time a slice position/orientation changes. Each subsequent frame will average with the preceding N frames where N is chosen based on motion and time. Each frame collected at a particular position/orientation will have a different number of averages, but all other Attributes are likely to remain the same. This particular application drives this Attribute being in its own group.
This Annex was formerly located in Annex O “Clinical Trial Identification Workflow Examples (Retired)” in PS3.3 in the 2003 and earlier revisions of the Standard.
The Clinical Trial Identification modules are optional. As such, there are several points in the workflow of clinical trial or research data at which the Clinical Trial Identification Attributes may be added to the data. At the Clinical Trial Site, the Attributes may be added at the scanner, a PACS system, a site workstation, or a workstation provided to the site by a Clinical Trial Coordinating Center. If not added at the site, the Clinical Trial Identification Attributes may be added to the data after receipt by the Clinical Trial Coordinating Center. The addition of clinical trial Attributes does not itself require changes to the SOP Instance UID. However, the clinical trial or research protocol or the process of de-identification may require such a change.
Images are obtained for the purpose of comparing patients treated with placebo or the drug under test, then evaluated in a blinded manner by a team of radiologists at the Clinical Trial Coordinating Center (CTCC). The images are obtained at the clinical sites, collected by the CTCC, at which time their identifying Attributes are removed and the Clinical Trial Identification (CTI) module is added. The de-identified images with the CTI information are then presented to the radiologists who make quantitative and/or qualitative assessments. The assessments, and in some cases the images, are returned to the sponsor for analysis, and later are contributed to the submission to the regulating authority.
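The CTCC step of removing identifying Attributes and adding the CTI module can be sketched as below. This is an illustration only: the data sets are modeled as plain dicts keyed by DICOM keyword, and the list of removed keywords is a small representative subset, not a complete de-identification profile.

```python
def deidentify_and_tag(dataset, cti):
    """Remove identifying Attributes and add Clinical Trial
    Identification values, as done at the CTCC in the workflow above.

    `cti` holds the Clinical Trial Identification module values, e.g.,
    ClinicalTrialSponsorName, ClinicalTrialProtocolID,
    ClinicalTrialSiteID, ClinicalTrialSubjectID.
    """
    # Representative subset of identifying Attributes; a real
    # de-identification profile covers far more.
    identifying = ("PatientName", "PatientID", "PatientBirthDate",
                   "InstitutionName", "ReferringPhysicianName")
    cleaned = {k: v for k, v in dataset.items() if k not in identifying}
    cleaned.update(cti)
    return cleaned
```

Note that, per the text above, adding the CTI Attributes does not itself require a new SOP Instance UID, but the de-identification step may.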
The templates for ultrasound reports are defined in Annex A “Structured Reporting Templates (Normative)” in PS3.16. Figure I.1-1 is an outline of the common elements of ultrasound structured reports.
The Patient Characteristics Section is for medical data of immediate relevance to administering the procedure and interpreting the results. This information may originate outside the procedure.
The Procedure Summary Section contains exam observations of immediate or primary significance. This is key information a physician typically expects to see first in the report.
Measurements typically reside in a measurement group container within a Section. Measurement groups share context such as anatomical location, protocol or type of analysis. The grouping may be specific to a product implementation or even to a user configuration. OB-GYN measurement groups have related measurements, averages and other derived results.
If present, the Image Library contains a list of images from which observations were derived. These are referenced from the observations with by-reference relationships.
The Procedure Summary Section contains the observations of most immediate interest. Observations in the procedure summary may have by-reference relationships to other Content Items.
Where multiple fetuses exist, the observations specific to each fetus must reside under separate section headings. The section heading must specify the fetus observation context, designating it with Subject ID (121030, DCM, "Subject ID") and/or a numerical designation (121037, DCM, "Fetus Number") as shown below. See TID 1008 “Subject Context, Fetus”.
Reports may specify dependencies of a calculation on its dependent observations using by-reference relationships. This relationship must be present for the report reader to know the inputs of the derived value.
Optionally, the relationship of an observation to its image and image coordinates can be encoded with by-reference Content Items as Figure I.5-1 shows. For conciseness, the by-reference relationship points to the Content Item in the Image Library, rather than directly to the image.
R-INFERRED FROM relationships to IMAGE Content Items specify that the image supports the observation. A purpose of reference in an SCOORD Content Item may specify an analytic operation (performed on that image) that supports or produces the observation.
A common OB-GYN pattern is that of several instances of one measurement type (e.g., BPD), the calculated average of those values, and derived values such as a gestational age calculated according to an equation or table. The measurements and calculations are all siblings in the measurement group. A child Content Item specifies the equation or table used to calculate the gestational age. All measurement types must relate to the same biometric type. For example, it is not allowed to mix a BPD and a Nuchal Fold Thickness measurement in the same biometry group.
The example above shows a gestational age calculated from the measured value. The INFERRED FROM relationship points to an equation or table Content Item, whose Concept Name identifies the equation or table. Codes from CID 12013 “Gestational Age Equation/Table” identify the specific equation or table.
Another use case is the calculation of a growth parameter's relationship to that of a referenced distribution and a known or assumed gestational age. Codes from CID 12015 “Fetal Growth Equation/Table” identify the growth table. Figure I.6-2 shows the assignment of a percentile for the measured BPD, against the growth of a referenced population. The dependency on the gestational age is expressed as a by-reference relationship to the established gestational age. Though the percentile rank is derived from the BPD measurement, a by-reference relationship is not essential if one BPD has a concept modifier indicating that it is the mean or has selection status (see TID 300 “Measurement”). A variation of this pattern is the use of Z-score instead of percentile rank. Not shown is the expression of the normal distribution mean, standard deviation, or confidence limits.
Estimated fetal weight (EFW) is a fetus summary item as shown below. It is calculated from one or more growth parameters (the inferred from relationships are not shown). TID 315 “Equation or Table” allows specifying how the value was derived. Terms from CID 12014 “OB Fetal Body Weight Equation/Table” specify the table or equation that yields the EFW from growth parameters.
"EFW percentile rank" is another summary term. By definition, this term depends upon the EFW and the population distribution of the ranking. A Reference Authority Content Item identifies the distribution. CID 12016 “Estimated Fetal Weight Percentile Equation/Table” is a list of established reference authorities.
When multiple observations of the same type exist, one of these may be the selected value. Typically, this value is the average of the others, or it may be the last entered, or user chosen. TID 310 “Measurement Properties” provides a Content Item with concept name of (121404, DCM, "Selection Status") and a value set specified by DCID 224 “Selection Method”.
There are multiple ways that a measurement may originate. The measurement value may be produced by an interactive measurement tool on the system. Alternatively, the user may directly enter the value, or the system may create a value automatically as the mean of multiple measurement instances. TID 300 “Measurement” provides that a concept modifier of the numeric Content Item specify the derivation of the measurement. The concept name of the modifier is (121401, DCM, "Derivation"). CID 3627 “Measurement Type” provides concepts of appropriate measurement modifiers. Figure I.7-2 illustrates such a case.
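The system-created mean case can be sketched as follows. The dict layout is purely illustrative (not a DICOM SR encoding); only the (121401, DCM, "Derivation") modifier concept comes from the text above:

```python
from statistics import mean

def measurement_group(concept, values):
    """Build a simplified measurement group: the individual measurement
    instances plus a system-created mean tagged with a
    (121401, DCM, "Derivation") = Mean concept modifier.
    """
    items = [{"concept": concept, "value": v} for v in values]
    items.append({
        "concept": concept,
        "value": mean(values),
        # Concept modifier marking this value as system-derived.
        "modifiers": [{"concept": ("121401", "DCM", "Derivation"),
                       "value": "Mean"}],
        # A Selection Status item (121404, DCM, "Selection Status")
        # could additionally mark this as the selected value.
    })
    return items
```

All items share the same measurement concept (e.g., BPD), matching the rule that a biometry group must not mix measurement types.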
The following are simple, non-comprehensive illustrations of report sections.
The following example shows the highest level of Content Items for a second or third trimester OB exam. Subsequent examples show details of section content.
The following example shows the highest level of Content Items for a GYN exam. Subsequent examples show details of section content.
Optionally, but not shown, the ratios may have by-reference, inferred-from relationships to the Content Items holding the numerator and denominator values.
This example shows measurements and estimated gestational age.
This example shows measurements with percentile ranking.
The content structure in the example below conforms to TID 5012 “Ovaries Section”. The example shows the volume derived from three perpendicular diameters.
The content structure in the example below conforms to TID 5013 “Follicles Section”. It uses multiple measurements and derived averages for each of the perpendicular diameters.
This Annex was formerly located in Annex M “Handling of Identifying Parameters (Informative)” in PS3.4 in the 2003 and earlier revisions of the Standard.
The DICOM Standard was published in 1993 and addresses the communication of medical images between medical modalities, workstations, and other medical devices, as well as data exchange between medical devices and the Information System (IS). DICOM defines SOP Instances with Patient, Visit, and Study information managed by the Information System and allows the Attribute values of these objects to be communicated.
Since the publication of the DICOM Standard, great effort has been made to harmonize the Information Model of the DICOM Standard with the models of other relevant standards, especially with the HL7 model and the CEN TC 251 WG3 PT 022 model. The result of these efforts is a better understanding of the various practical situations in hospitals and an adaptation of the model to these situations. In the discussion of models, the definition of Information Entities and their Identifying Parameters plays a very important role.
The purpose of this Informative Annex is to show which identifying parameters may be included in Image SOP Instances and their related Modality Performed Procedure Step (MPPS) SOP Instance. Different scenarios are elucidated to describe varying levels of integration of the Modality with the Information System, as well as situations in which a connection is temporarily unavailable.
In this Annex, "Image SOP Instance" is used as a collective term for all Composite Image Storage SOP Instances.
The scenarios described here are informative and do not constitute a normative section of the DICOM Standard.
"Integrated" means in this context that the Acquisition Modality is connected to an Information System or Systems that may be an SCP of the Modality Worklist SOP Class or an SCP of the Modality Performed Procedure Step SOP Class or both. In the following description only the behavior of "Modalities" is mentioned; it goes without saying that the IS must conform to the same SOP Classes.
The Modality receives identifying parameters by querying the Modality Worklist SCP and generates other Attribute values during image generation. It is desirable that these identifying parameters be included in the Image SOP Instances as well as in the MPPS object in a consistent manner. In the case of a Modality that is integrated but unable to receive or send identifying parameters, e.g., link down, emergency case, the Modality may behave as if it were not integrated.
The Study Instance UID is a crucial Attribute that is used to relate Image SOP Instances (whose Study is identified by their Study Instance UID), the Modality PPS SOP Instance that contains it as a reference, and the actual or conceptual Requested Procedure (i.e., Study) and related Imaging Service Request in the IS. An IS that manages an actual or conceptual Detached Study Management entity is expected to be able to relate this Study Instance UID to the SOP Instance UID of the Detached Study Management SOP Instance, whether or not the Study Instance UID is provided by the IS or generated by the modality.
For a detailed description of an integrated environment see the IHE Radiology Technical Framework. This document can be obtained at http://www.ihe.net/
N-CREATE a MPPS SOP Instance and include its SOP Instance UID in the Image SOP Instances within the Referenced Performed Procedure Step Sequence Attribute.
Copy the following Attribute values from the Modality Worklist information into the Image SOP Instances and into the related MPPS SOP Instance:
Create the following Attribute value and include it into the Image SOP Instances and the related MPPS SOP Instance:
Include the following Attribute values that may be generated during image acquisition, if supported, into the Image SOP Instances and the related MPPS SOP Instance:
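The steps above can be sketched as follows. The data sets are modeled as plain dicts keyed by DICOM keyword, and the copied keywords are a representative subset rather than the full required list; the "2.25." prefix is the standard mechanism for UUID-derived UIDs:

```python
from uuid import uuid4

def new_uid():
    # UUID-derived UID under the "2.25" root.
    return "2.25." + str(uuid4().int)

def build_instances(worklist_item):
    """Populate an Image SOP Instance and its MPPS SOP Instance
    consistently from a Modality Worklist item, per the steps above.
    """
    # Copy a representative subset of worklist Attributes into both
    # the Image and the MPPS SOP Instances.
    copied = {k: worklist_item[k]
              for k in ("PatientName", "PatientID",
                        "StudyInstanceUID", "AccessionNumber")
              if k in worklist_item}
    mpps = dict(copied, SOPInstanceUID=new_uid())
    # The image references the MPPS via the Referenced Performed
    # Procedure Step Sequence.
    image = dict(copied,
                 SOPInstanceUID=new_uid(),
                 ReferencedPerformedProcedureStepSequence=[
                     {"ReferencedSOPInstanceUID": mpps["SOPInstanceUID"]}])
    return image, mpps
```

The key point is consistency: the same worklist-derived values appear in both the Image SOP Instances and the MPPS SOP Instance, and the image carries the MPPS SOP Instance UID.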
In the absence of the ability to N-CREATE a MPPS SOP Instance, generate a MPPS SOP Instance UID and include it into the Referenced Performed Procedure Step Sequence Attribute of the Image SOP Instances. A system that later N-CREATEs a MPPS SOP Instance may use this UID extracted from the related Image SOP Instances.
Copy the following Attribute values from the Modality Worklist information into the Image SOP Instances:
Create the following Attribute value and include it into the Image SOP Instances:
A system that later N-CREATEs a MPPS SOP Instance may use this Attribute value extracted from the related Image SOP Instances.
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
N-CREATE a MPPS SOP Instance and include its SOP Instance UID in the Image SOP Instances within the Referenced Performed Procedure Step Sequence Attribute.
Create the following Attribute values and include them in the Image SOP Instances and the related MPPS SOP Instance:
Copy the following Attribute values, if available to the Modality, into the Image SOP Instances and into the related MPPS SOP Instance:
If sufficient identifying information is included, it will allow the Image SOP Instances and the MPPS SOP Instance to be later related to the Requested Procedure and the actual or conceptual Detached Study Management entity.
"Non-Integrated" means in this context that the Acquisition Modality is not connected to an Information System or Systems, does not receive Attribute values from an SCP of the Modality Worklist SOP Class, and cannot create a Performed Procedure Step SOP Instance.
In the absence of the ability to N-CREATE a MPPS SOP Instance, generate a MPPS SOP Instance UID and include it into the Referenced Performed Procedure Step Sequence Attribute of the Image SOP Instances. A system that later N-CREATEs a MPPS SOP Instance may use this UID extracted from the related Image SOP Instances.
Create the following Attribute values and include them in the Image SOP Instances:
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
If sufficient identifying information is included, it will allow the Image SOP Instances to be later related to the Requested Procedure and the actual or conceptual Detached Study Management entity.
A system that later N-CREATEs a MPPS SOP Instance may use these Attribute values extracted from the related Image SOP Instances.
In the MPPS SOP Instance, all the specific Attributes of a Scheduled Procedure Step or Steps are included in the Scheduled Step Attributes Sequence. In the Image SOP Instances, these Attributes may be included in the Request Attributes Sequence. This is an optional Sequence in order not to change the definition of existing SOP Classes by adding new required Attributes or changing the meaning of existing Attributes.
Both Sequences may have more than one Item if more than one Requested Procedure results in a single Performed Procedure Step.
Because of the definitions of existing Attributes in existing Image SOP Classes, the following solutions are a compromise. The first one chooses or creates a value for the single valued Attributes Study Instance UID and Accession Number. The second one completely replicates the Image data with different values for the Attributes Study Instance UID and Accession Number.
create a Request Attributes Sequence containing two or more Items each containing the following Attributes:
create a Referenced Study Sequence containing two or more Items sufficient to contain the Study SOP Instance UID values from the Modality Worklist for both Requested Procedures
select one value from the Modality Worklist or generate a new value for:
select one value from the Modality Worklist or generate a new value or assign an empty value for:
An alternative method is to replicate the entire Image SOP Instance with a new SOP Instance UID, and assign each Image IOD its own identifying Attributes. In this case, each of the Study Instance UID and the Accession Number values can be used in its own Image SOP Instance.
Both Image SOP Instances may reference a single MPPS SOP Instance (via the MPPS SOP Instance UID in the Referenced Performed Procedure Step Sequence).
Each individual Image SOP Instance may reference its own related Study SOP Instance, if it exists (via the Referenced Study Sequence). This Study SOP Instance has a one-to-one relationship with the corresponding Requested Procedure.
If an MPPS SOP Instance is created, it may reference both related Study SOP Instances.
For all Series in the MPPS, replicate the entire Series of Images using new Series Instance UIDs
Create replicated Image SOP Instances with different SOP Instance UIDs that use the new Series Instance UIDs, for each of the two or more Requested Procedures
In each of the Image SOP Instances, using values from the corresponding Requested Procedure:
In the MPPS SOP Instance (if supported):
In both the Image SOP Instances and the MPPS SOP Instance (if supported):
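The replication steps above can be sketched as follows; the data sets are simplified dicts for illustration, and `new_uid` is a caller-supplied UID generator:

```python
def replicate_for_procedures(series_list, requested_procedures, new_uid):
    """Replicate each Series of Images once per Requested Procedure,
    giving every copy new Series and SOP Instance UIDs and that
    procedure's Study Instance UID / Accession Number.
    """
    out = []
    for proc in requested_procedures:
        for series in series_list:
            # One new Series Instance UID per replicated Series.
            series_uid = new_uid()
            for image in series["images"]:
                copy = dict(image,
                            SeriesInstanceUID=series_uid,
                            SOPInstanceUID=new_uid(),
                            StudyInstanceUID=proc["StudyInstanceUID"],
                            AccessionNumber=proc["AccessionNumber"])
                out.append(copy)
    return out
```

All of the replicated instances may still reference the single MPPS SOP Instance, as described above, so that a receiver can recognize them as products of one Performed Procedure Step.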
If for some reason the Modality was unable to create the MPPS SOP Instance, another system may wish to perform this service. This system must make sure that the created PPS SOP Instance is consistent with the related Image SOP Instances.
Depending on the availability and correctness of values for the Attributes in the Image SOP Instances, these values may be copied into the MPPS SOP Instance, or they may have to be coerced, e.g., if they are not consistent with corresponding values available from the IS.
For example, if the MPPS SOP Instance UID is already available in the Image SOP Instance (in the Referenced Performed Procedure Step Sequence), it may be utilized to N-CREATE the MPPS SOP Instance. If not available, a new MPPS SOP Instance UID may be generated and used to N-CREATE the MPPS SOP Instance. In this case there may be no MPPS SOP Instance UID in the Referenced Performed Procedure Step Sequence in the corresponding Image SOP Instances. An update of the Image SOP Instances will restore the consistency, but this is not required.
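The UID selection described above can be sketched as a small helper; the dicts are simplified stand-ins for the Image SOP Instances, and `new_uid` is a caller-supplied generator:

```python
def mpps_uid_for_images(image_instances, new_uid):
    """Choose the MPPS SOP Instance UID when another system performs
    the N-CREATE later: reuse the UID already referenced by the images
    if present, otherwise generate a new one.
    """
    for image in image_instances:
        for item in image.get("ReferencedPerformedProcedureStepSequence", []):
            uid = item.get("ReferencedSOPInstanceUID")
            if uid:
                # The images already name their MPPS; reuse it so the
                # created MPPS stays consistent with them.
                return uid
    return new_uid()
```

In the generated-UID case, the images do not reference the MPPS; as the text notes, updating them would restore consistency but is not required.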
The purpose of this annex is to enhance consistency and interoperability among creators and consumers of Ultrasound images within Staged Protocol Exams. An ultrasound "Staged Protocol Exam" is an exam that acquires a set of images under specified conditions during time intervals called "Stages". An example of such an exam is a cardiac stress-echo Staged Protocol.
This informative annex describes the use of ultrasound Staged Protocol Attributes within the following DICOM Services: Ultrasound Image, Ultrasound Multi-frame Image, and Key Object Selection Document Storage, Modality Worklist, and Modality Performed Procedure Step Services.
The support of ultrasound Staged Protocol Data Management requires support for the Ultrasound Image SOP Class or Ultrasound Multi-frame Image SOP Class as appropriate for the nature of the Protocol. By supporting some optional Elements of these SOP Classes, Staged-Protocols can be managed. Support of Key Object Selection allows control of the order of View and Stage presentation. Support of Modality Worklist Management and Modality Performed Procedure Step allow control over specific workflow use cases as described in this Annex.
A "Staged Protocol Exam" acquires images in two or more distinct time intervals called "Stages" with a consistent set of images called "Views" acquired during each Stage of the exam. A View is of a particular cross section of the anatomy acquired with a specific ultrasound transducer position and orientation. During the acquisition of a Staged Protocol Exam, the modality may also acquire non-Protocol images at one or more Protocol Stages.
A common real-world example of an ultrasound Staged Protocol exam is a cardiac stress-echo ultrasound exam. Images are acquired in distinct time intervals (Stages) of different levels of stress and Views as shown in Figure K.3-1. Typically, stress is induced by means of patient exercise or medication. Typical Stages for such an exam are baseline, mid-stress, peak-stress, and recovery. During the baseline Stage the patient is at rest, prior to inducing stress through medication or exercise. At mid-stress Stage the heart is under a moderate level of stress. During peak-stress Stage the patient's heart experiences maximum stress appropriate for the patient's condition. Finally, during the recovery Stage, the heart recovers because the source of stress is absent.
At each Stage an equivalent set of Views is acquired. Examples of typical Views are parasternal long axis and parasternal short axis. Examination of wall motion between the corresponding Views of different Stages may reveal ischemia of one or more regions ("segments") of the myocardium. Figure K.3-1 illustrates the typical results of a cardiac stress-echo ultrasound exam.
The DICOM Standard includes a number of Attributes of significance to Staged Protocol Exams. This Annex explains how scheduling and acquisition systems may use these Attributes to convey Staged Protocol related information.
Table K.4-1 lists all the Attributes relevant to convey Staged Protocol related information (see PS3.3 for details about these Attributes).
Table K.4-1. Attributes That Convey Staged Protocol Related Information
This annex provides guidelines for implementation of the following aspects of Staged Protocol exams:
The Attributes Number of Stages (0008,2124) and Number of Views in Stage (0008,212A) are each Type 2C with the condition "Required if this image was acquired in a Staged Protocol." These two Attributes will be present with values in image SOP Instances if the exam meets the definition of a Staged Protocol Exam stated in Section K.3. This includes both the Protocol View images as well as any extra-Protocol images acquired during the Protocol Stages.
The Attributes Protocol Name (0018,1030) and Performed Protocol Code Sequence (0040,0260) identify the Protocol of a Staged Protocol Exam, but the mere presence of one or both of these Attributes does not in itself identify the acquisition as a Staged Protocol Exam. If both Protocol Name and Performed Protocol Code Sequence Attributes are present, the Protocol Name value takes precedence over the Performed Protocol Code Sequence Code Meaning value as a display label for the Protocol, since the Protocol Name would convey the institutional preference better than the standardized code meaning.
Display devices usually include capabilities that aid in the organization and presentation of images acquired as part of the Staged Protocol. These capabilities allow a clinician to display images of a given View acquired during different Stages of the Protocol side by side for comparison. A View is a particular combination of the transducer position and orientation at the time of image acquisition. Images are acquired at the same View in different Protocol Stages for the purpose of comparison. For these features to work properly, the display device must be able to determine the Stage and View of each image in an unambiguous fashion.
There are three possible mechanisms for conveying Stage and View identification in the image SOP Instances:
"Numbers" (Stage Number (0008,2122) and View Number (0008,2128) ), which number Stages and Views, starting with one.
"Names" (Stage Name (0008,2120) and View Name (0008,2127) ), which specify textual names for each Stage and View, respectively.
"Code sequences" (Stage Code Sequence (0040,000A) for Stage identification, and View Code Sequence (0054,0220) for View identification), which give identification "codes" to the Stage and View respectively.
The use of code sequences to identify Stage and View, using Context Group values specified in PS3.16 (e.g., CID 12002 “Ultrasound Protocol Stage Type” and CID 12226 “Echocardiography Image View”), allows a display application with knowledge of the code semantics to render a display in accordance with clinical domain uses and user preferences (e.g., populating each quadrant of an echocardiographic display with the user desired stage and view). The IHE Echocardiography Workflow Profile requires such use of code sequences for stress-echo studies.
Table K.5-1 provides an example of the Staged Protocol relevant Attributes in images acquired during a typical cardiac stress-echo ultrasound exam.
Table K.5-1. Staged Protocol Image Attributes Example
At any Stage of a Staged Protocol exam, the operator may acquire images that are not part of the Protocol. These images are so-called "extra-Protocol images". Information regarding the performed Protocol is still included because such images are acquired in the same Procedure Step as the Protocol images. The Stage number and optionally other Stage identification Attributes (Stage Name and/or Stage Code Sequence) should still be conveyed in extra-Protocol images. However, the View number should be omitted to signify that the image is not one of the standard Views in the Protocol. Other View identifying information, such as name or code sequences, may indicate the image location.
Table K.5-2. Comparison Of Protocol And Extra-Protocol Image Attributes Example
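The Stage and View identification rules above can be illustrated schematically. The dictionary keys below stand in for the corresponding DICOM Attributes, and the four-stage, two-view protocol and the stage/view names are made-up example values:

```python
def staged_image(stage_number, stage_name, view_number=None, view_name=None,
                 number_of_stages=4, number_of_views=2):
    """Attributes identifying an image in a Staged Protocol Exam.

    Passing view_number=None models an extra-Protocol image: the Stage
    identification is kept, but View Number is omitted.
    """
    attrs = {
        "NumberOfStages": number_of_stages,
        "NumberOfViewsInStage": number_of_views,
        "StageNumber": stage_number,
        "StageName": stage_name,
    }
    if view_number is not None:
        attrs["ViewNumber"] = view_number
        if view_name is not None:
            attrs["ViewName"] = view_name
    return attrs

protocol_img = staged_image(2, "MID-STRESS", 1, "PLAX")  # Protocol View image
extra_img = staged_image(2, "MID-STRESS")                # extra-Protocol image
```

Note that the extra-Protocol image still carries the Stage identification and the stage/view counts, so a display device can group it with its Stage without mistaking it for one of the standard Views.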
Ultrasound systems often acquire multiple images at a particular stage and view. If one image is difficult to interpret or does not fully portray the ventricle wall, the physician may choose to view an alternate. In some cases, the user may identify the preferred image. The Key Object Selection Document can identify the preferred image for any or all of the Stage-Views. This specific usage of the Key Object Selection Document has a Document Title of (113013, DCM, "Best In Set") and Document Title Modifier of (113017, DCM, "Stage-View").
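A minimal sketch of such a Key Object Selection follows, using the Document Title and Document Title Modifier codes given above. The dictionary layout is illustrative only, not the actual SR encoding:

```python
# Codes from the annex, as (value, scheme designator, meaning) triples.
BEST_IN_SET = ("113013", "DCM", "Best In Set")     # Document Title
STAGE_VIEW = ("113017", "DCM", "Stage-View")       # Document Title Modifier

def best_in_set_selection(preferred_sop_instance_uids):
    """Key Object Selection identifying the preferred image per Stage-View."""
    return {
        "DocumentTitle": BEST_IN_SET,
        "DocumentTitleModifier": STAGE_VIEW,
        "ReferencedSOPInstanceUIDs": list(preferred_sop_instance_uids),
    }
```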
Modality Performed Procedure Step (MPPS) is the basic organizational unit of Staged Protocol Exams. It is recommended that a single MPPS instance encompass the entire acquisition of an ultrasound Staged Protocol Exam if possible.
There are no semantics assigned to the use of Series within a Staged Protocol Exam other than the DICOM requirements as to the relationship between Series and Modality Performed Procedure Steps. In particular, all of the following scenarios are possible:
There is no recommendation on the organization of images into Series because clinical events make such recommendations impractical. Figure K.5.5-1 shows a possible sequence of interactions for a protocol performed as a single MPPS.
A special case arises when the acquisition during a Protocol Stage is halted for some reason. Staged Protocols generally include criteria for ending the exam, such as reaching a target time duration or observing signs of patient distress (e.g., angina in a cardiac stress exam). Because these criteria are part of the normal exam Protocol, the MPPS status is set to COMPLETED as long as the conditions defined for the Protocol are met. Only if the exam terminates before meeting the minimum acquisition requirements of the selected Protocol is the MPPS status set to DISCONTINUED. It is recommended that the reason for discontinuation be conveyed in the Modality Procedure Step Discontinuation Reason Code Sequence (0040,0281).
If a Protocol Stage is to be acquired at a later time with the intention of using an earlier completed Protocol Stage of a halted Staged Protocol then a new Scheduled Procedure Step may or may not be created for this additional acquisition. Workflow management recommendations vary depending on whether the care institution decides to create a new Scheduled Procedure Step or not.
Follow-up Stages must use View Numbers, Names, and Code Sequences identical to those in the prior Stages to enable automatically correlating images of the original and follow-up Stages.
K.5.5.2.1 Unscheduled Follow-up Stages
Follow-up Stages require a separate MPPS. Since follow-up stages are part of the same Requested Procedure and Scheduled Procedure Step, all acquired image SOP Instances and generated MPPS instances specify the same Study Instance UID. If the Study Instance UID is different, systems will have difficulty associating related images. This creates a significant problem if Modality Worklist is not supported. Therefore systems should assign the same Study Instance UID for follow-up Stages even if Modality Worklist is not supported. Figure K.5.5-2 shows a possible interaction sequence for this scenario.
In some cases a new Scheduled Procedure Step is created to acquire follow-up Stages. For example, a drug induced stress-echo exam may be scheduled because an earlier exercise induced stress-echo exam had to be halted due to patient discomfort. In such cases it would be redundant to reacquire earlier Stages such as the rest Stage of a cardiac stress-echo ultrasound exam. One MPPS contains the Image instances of the original Stage and a separate MPPS contains the follow-up instances.
If Scheduled and Performed Procedure Steps for Staged Protocol Exam data use the same Study Instance UID, workstations can associate images from the original and follow-up Stages. Figure K.5.5-3 shows a possible interaction sequence for this scenario.
The Hemodynamics Report is based on TID 3500 “Hemodynamics Report”. The report contains one or more measurement containers, each corresponding to a phase of the cath procedure. Within each container may be one or more sub-containers, each associated with a single measurement set. A measurement set consists of measurements from a single anatomic location. The resulting hierarchical structure is depicted in Figure L-1.
The container for each phase has an optional subsidiary container for Clinical Context with a parent-child relationship of has-acquisition-context. This Clinical Context container allows the recording of pertinent patient state information that may be essential to understanding the measurements made during that procedure phase. It should be noted that any such patient state information is necessarily only a summary; a more complete clinical picture may be obtained by review of the cath procedure log.
The lowest level containers for the measurement sets are specialized by the class of anatomic location - arterial, venous, atrial, ventricular - for the particular measurements appropriate to that type of location. These containers explicitly identify the anatomic location with a has-acquisition-context relationship. Since such measurement sets are typically measured on the same source (e.g., pressure waveform), the container may also have a has-acquisition-context relationship with a source DICOM waveform SOP Instance.
The "atomic" level of measurements within the measurement set containers includes three types of data. First is the specific measurement data acquired from waveforms related to the site. Second is general measurement data that may include any hemodynamic, patient vital sign, or blood chemistry data. Third, derived data are produced from a combination of other data using a mathematical formula or table, and may provide reference to the equation.
The vascular procedure report partitions numeric measurements into section headings by anatomic region and by laterality. A laterality concept modifier of the section heading concept name specifies whether the laterality is left or right. Laterally paired anatomy sections may therefore appear twice, once for each laterality. Findings of unpaired anatomy are contained in a separate "unilateral" section container. Thus, in vascular ultrasound, laterality is always expressed at the section heading level with one of three states: left, right, or unilateral (unpaired). There is no provision for anatomy of unknown laterality other than as a TEXT Content Item in the summary.
Note that expressing laterality at the heading level differs from OB-GYN Pelvic and fetal vasculature, which expresses laterality as concept modifiers of the anatomic containers.
The common vascular pattern is a battery of measurements and calculations repeatedly applied to various anatomic locations. The anatomic location is the acquisition context of the measurement group. For example, a measurement group may have a measurement source of Common Iliac Artery with several measurement instances and measurement types such as mean velocity, peak systolic velocity, acceleration time, etc.
There are distinct anatomic concepts to modify the base anatomy concept. The modification is expressed as a Content Item with a modifier concept name and a value selected from a Context Group, as shown in the table below.
The templates for ultrasound reports are defined in PS3.16. Figure N.1-1 is an outline of the echocardiography report.
The common echocardiography measurement pattern is a group of measurements obtained in the context of a protocol. Figure N.1-2 shows the pattern.
DICOM identifies echocardiography observations with various degrees of pre- and post-coordination. The concept name of the base Content Item typically specifies both anatomy and property for commonly used terms, or purely a property. Pure property concepts require an anatomic site concept modifier. Pure property concepts such as those in CID 12222 “Orifice Flow Property” and CID 12239 “Cardiac Output Property” use concept modifiers shown below.
Further qualification specifies the image mode and the image plane using HAS ACQ CONTEXT with the value sets shown below.
The content of this section provides recommendations on how to express the concepts from draft ASE guidelines with measurement type concept names and concept name modifiers.
The leftmost column is the name of the ASE concept. The Base Measurement Concept Name is the concept name of the numeric measurement Content Item. The modifiers column specifies a set of modifiers for the base measurement concept name. Each modifier consists of a modifier concept name (e.g., method or mode) and its value (e.g., Continuity). Where no Concept Modifier appears, the base concept matches the ASE concept.
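The pattern can be sketched as a base numeric Content Item plus concept name modifier pairs. The tuple and dictionary layout is an illustrative stand-in for SR encoding, and the base concept code value shown is a placeholder, not a real code; the Finding Site modifier codes are those used throughout this section:

```python
# Modifier codes taken from this annex, as (value, scheme, meaning) triples.
FINDING_SITE = ("363698007", "SCT", "Finding Site")
LVOT = ("13418002", "SCT", "Left Ventricular Outflow Tract")

def numeric_measurement(concept_name, value, units, modifiers=()):
    """A numeric measurement Content Item with concept name modifiers."""
    return {
        "ValueType": "NUM",
        "ConceptNameCode": concept_name,
        "Value": value,
        "Units": units,
        "ConceptModifiers": [{"Name": name, "Value": val}
                             for name, val in modifiers],
    }

# A pure property concept (placeholder code value) gains its anatomic
# context through a Finding Site concept modifier:
vti = numeric_measurement(
    ("XXXXX", "LN", "Velocity Time Integral"),   # placeholder code value
    22.1, "cm",
    modifiers=[(FINDING_SITE, LVOT)],
)
```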
Aortic Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Aortic Valve with the concept modifier (363698007, SCT, "Finding Site") = (34202007, SCT, "Aortic Valve"). Therefore, the Finding Site modifier does not appear in the right column.
Measurements in the Left Ventricle section have context of Left Ventricle and do not require a Finding Site modifier (363698007, SCT, "Finding Site") = (87878005, SCT, "Left Ventricle") to specify the site. The Finding Site modifier appears for more specificity.
ASE Concept | Base Measurement Concept Name | Concept Name Modifiers
Left Ventricular Outflow Tract Systolic Cross Sectional Area | | (363698007, SCT, "Finding Site") = (13418002, SCT, "Left Ventricular Outflow Tract")
Left Ventricular Outflow Tract Systolic Peak Instantaneous Gradient | | (363698007, SCT, "Finding Site") = (13418002, SCT, "Left Ventricular Outflow Tract")
Left Ventricular Outflow Tract Systolic Velocity Time Integral | | (363698007, SCT, "Finding Site") = (13418002, SCT, "Left Ventricular Outflow Tract")
Left Ventricular Mass by 2-D Method of Disks, Single Plane (4-Chamber) | | (399264008, SCT, "Image Mode") = (399064001, SCT, "2D mode"); (370129005, SCT, "Measurement Method") = (125208, DCM, "Method Of Disks, single plane")
 | | (399264008, SCT, "Image Mode") = (399064001, SCT, "2D mode"); (370129005, SCT, "Measurement Method") = (125207, DCM, "Method of disks, biplane")
Mitral Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Mitral Valve with the concept modifier (363698007, SCT, "Finding Site") = (91134007, SCT, "Mitral Valve"). Therefore, the Finding Site modifier does not appear in the right column.
Pulmonic Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Pulmonic Valve with the concept modifier (363698007, SCT, "Finding Site") = (39057004, SCT, "Pulmonic Valve"). Therefore, this Finding Site concept modifier does not appear in the right column.
Tricuspid Valve measurements appear in TID 5202 “Echo Section”, which specifies the Finding Site to be Tricuspid Valve with the concept modifier (363698007, SCT, "Finding Site") = (46030003, SCT, "Tricuspid Valve"). Therefore, the Finding Site modifier does not appear in the right column.
ASE Concept | Base Measurement Concept Name | Concept Name Modifiers
Thoracic Aorta Coarctation Systolic Peak Velocity | (29460-3, LN, "Thoracic Aorta Coarctation Systolic Peak Velocity") |
Thoracic Aorta Coarctation Systolic Peak Instantaneous Gradient | (17995-2, LN, "Thoracic Aorta Coarctation Systolic Peak Instantaneous Gradient") | (363698007, SCT, "Finding Site") = (253678000, SCT, "Thoracic Aortic Coarctation")
Ventricular Septal Defect Systolic Peak Instantaneous Gradient | | (363698007, SCT, "Finding Site") = (30288003, SCT, "Ventricular Septal Defect")
 | | (363698007, SCT, "Finding Site") = (70142008, SCT, "Atrial Septal Defect")
Pulmonary-to-Systemic Shunt Flow Ratio by Doppler Volume Flow | | (370129005, SCT, "Measurement Method") = (125219, DCM, "Doppler Volume Flow")
The IVUS Report contains one or more vessel containers, each corresponding to the vessel (arterial location) being imaged. Each vessel is associated with one or more IVUS image pullbacks (Ultrasound Multi-frame Images), acquired during a phase of a catheterization procedure. Each vessel may contain one or more sub-containers, each associated with a single lesion. Each lesion container includes a set of IVUS measurements and qualitative assessments. The resulting hierarchical structure is depicted in Figure N.5-1.
These SOP Classes allow describing spatial relationships between sets of images. Each instance can describe any number of registrations as shown in Figure O.1-1. It may also reference prior registration instances that contribute to the creation of the registrations in the instance.
A Reference Coordinate System (RCS) is a spatial Frame of Reference described by the DICOM Frame of Reference Module. The chosen Frame of Reference of the Registration SOP Instance may be the same as one or more of the Referenced SOP Instances. In this case, the Frame of Reference UID (0020,0052) is the same, as shown by the Registered RCS in the figure. The registration information is a sequence of spatial transformations, potentially including deformation information. The composite of the specified spatial transformations defines the complete transformation from one RCS to the other.
Image instances may have no DICOM Frame of Reference, in which case the registration is to that single image (or frame, in the case of a Multi-frame Image). The Spatial Registration IOD may also be used to establish a coordinate system for an image that has no defined Frame of Reference. To do this, the center of the top left pixel of the source image is treated as being located at (0, 0, 0). Offsets from the first pixel are computed using the resolution specified in the Source IOD. Multiplying that coordinate by the Transformation matrix gives the patient coordinate in the new Frame of Reference.
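The mapping just described can be sketched numerically. Assuming row and column pixel spacing in millimeters and a 4x4 Frame of Reference Transformation Matrix given as nested lists, the centre of the top-left pixel maps from (0, 0, 0):

```python
def pixel_to_registered(row, col, row_spacing_mm, col_spacing_mm, matrix):
    """Map a pixel index of an image with no Frame of Reference into the
    Registered RCS: compute mm offsets from the first pixel centre, then
    apply the 4x4 transformation matrix to the homogeneous coordinate."""
    x = col * col_spacing_mm           # along-row offset in mm
    y = row * row_spacing_mm           # along-column offset in mm
    p = (x, y, 0.0, 1.0)               # homogeneous source coordinate
    return tuple(sum(matrix[r][c] * p[c] for c in range(4)) for r in range(3))
```

For example, with 0.5 mm pixel spacing and a pure translation of (10, 20, 0) mm, pixel (row 2, column 3) maps to (11.5, 21.0, 0.0).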
A special case is an atlas. DICOM has defined Well-Known Frame of Reference UIDs for several common atlases. There is not necessarily image data associated with an atlas.
When using the Spatial Registration or Deformable Registration SOP Classes there are two types of coordinate systems. The coordinate system of the referenced data is the Source RCS. The coordinate system established by the SOP instance is the Registered RCS.
The sense of the direction of transformation differs between the Spatial Registration SOP Class and the Deformable Spatial Registration SOP Class. The Spatial Registration SOP Class specifies a transformation that maps Source coordinates, in the Source RCS, to Registered coordinates, in the Registered RCS. The Deformable Spatial Registration SOP Class specifies transformations that map Registered coordinates, in the Registered RCS, to coordinates in the Source RCS.
The Spatial Fiducials SOP Class stores spatial fiducials as implicit registration information.
Multi-Modality Fusion: A workstation or modality performs a registration of images from independent acquisition modalities (PET, CT, MR, NM, and US) from multiple series. The workstation stores the registration data for subsequent visualization and image processing. Such visualization may include side-by-side synchronized display, or overlay (fusion) of one modality image on the display of another. The processes for such fusion are beyond the scope of the Standard. The workstation may also create and store a ready-for-display fused image, which references both the source image instances and the registration instance that describes their alignment.
Prior Study Fusion: Using post processing or a manual process, a workstation creates a spatial object registration of the current Study's Series from prior Studies for comparative evaluation.
Atlas Mapping: A workstation or a CAD device specifies fiducials of anatomical features in the brain such as the anterior commissure, posterior commissure, and points that define the hemispheric fissure plane. The system stores this information in the Spatial Fiducials SOP Instance. Subsequent retrieval of the fiducials enables a device or workstation to register the patient images to a functional or anatomical atlas, presenting the atlas information as overlays.
CAD: A CAD device creates fiducials of features during the course of the analysis. It stores the locations of the fiducials for future analysis in another imaging procedure. In the subsequent CAD procedure, the CAD device performs a new analysis on the new data. As before, it creates comparable fiducials, which it may store in a Spatial Fiducials SOP Instance. The CAD device then performs additional analysis by registering the images of the current exam to the prior exam. It does so by correlating the fiducials of the prior and current exam. The CAD device may store the registration in a Registration SOP Instance.
Adaptive Radiotherapy: A CT Scan is taken to account for variations in patient position prior to radiation therapy. A workstation performs the registration of the most recent image data to the prior data, corrects the plan, and stores the registration and revised plan.
Image Stitching: An acquisition device captures multiple images, e.g., DX images down a limb. A user identifies fiducials on each of the images. The system stores these in one or more Fiducial SOP Instances. Then the images are "stitched" together algorithmically by means that utilize the Fiducial SOP Instances as input. The result is a single image and optionally a Registration SOP Instance that indicates how the original images can be transformed to a location on the final image.
Figure O.3-1 shows the system interaction of storage operations for a registration of MR and CT using the Spatial Registration SOP Class. The Image Plane Module Attributes of the CT Series specify the spatial mapping to the RCS of its DICOM Frame of Reference.
The receiver of the Registration SOP Instance may use the spatial transformation to display or process the referenced image data in a common coordinate system. This enables interactive display in 3D during interpretation or planning, tissue classification, quantification, or Computer Aided Detection. Figure O.3-2 shows a typical interaction scenario.
In the case of coupled acquisition modalities, one acquisition device may know the spatial relationship of its image data relative to the other. The acquisition device may use the Registration SOP Class to specify the relationship of modality B images to modality A images as shown below in Figure O.3-3. In the most direct case, the data of both modalities are in the same DICOM Frame of Reference for each SOP Class Instance.
A Spatial Registration instance consists of one or more instances of a Registration. Each Registration specifies a transformation from the RCS of the Referenced Image Set to the RCS of this Spatial Registration instance (see PS3.3), identified by the Frame of Reference UID (0020,0052).
Figure O.4-1 shows an information model of a Spatial Registration to illustrate the relationship of the Attributes to the objects of the model. The DICOM Attributes that describe each object are adjacent to the object.
Figure O.4-2 shows an information model of a Deformable Spatial Registration to illustrate the relationship of the Attributes to the objects of the model. The DICOM Attributes that describe each object are adjacent to the object.
Figure O.4-3 shows a Spatial Fiducials information model to illustrate the relationship of the Attributes to the objects of the model. The DICOM Attributes that describe each object are adjacent to the object.
A 4x4 affine transformation matrix describes spatial rotation, translation, scale changes and affine transformations that register referenced images to the Registration IE's homogeneous RCS. These steps are expressible in a single matrix, or as a sequence of multiple independent rotations, translations, or scaling, each expressed in a separate matrix. Normally, registrations are rigid body, involving only rotation and translation. Changes in scale or affine transformations occur in atlas registration or to correct minor mismatches.
Fiducials are image-derived reference markers of location, orientation, or scale. These may be labeled points or collections of points in a data volume that specify a shape. Most commonly, fiducials are individual points.
Correlated fiducials of separate image sets may serve as inputs to a registration process to estimate the spatial registration between similar objects in the images. The correlation may, or may not, be expressed in the fiducial identifiers. A fiducial identifier may be an arbitrary number or text string to uniquely identify each fiducial from others in the set. In this case, fiducial correlation relies on operator recognition and control.
Alternatively, coded concepts may identify the acquired fiducials so that systems can automatically correlate them. Examples of such coded concepts are points of a stereotactic frame, prosthesis points, or well-resolved anatomical landmarks such as bicuspid tips. Such codes could be established and used locally by a department, over a wider area by a society or research study coordinator, or from a standardized set.
The table below shows each case of identifier encoding. A and B represent two independent registrations: one to some image set A, and the other to image set B.
Fiducials may be a point or some other shape. For example, three or more arbitrarily chosen points might designate the inter-hemispheric plane for the registration of head images. Many arbitrarily chosen points may identify a surface such as the inside of the skull.
A fiducial also has a Fiducial UID. This UID identifies the creation of the fiducial and allows other SOP Instances to reference the fiducial assignment.
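The code-based correlation of fiducials described above can be sketched as follows. The dictionary fields are illustrative stand-ins for the fiducial Attributes:

```python
def correlate_fiducials(set_a, set_b):
    """Pair fiducials from two image sets whose identifier codes match.

    The matched point pairs can then seed a point-based estimate of the
    spatial registration between the two image sets."""
    b_by_code = {f["IdentifierCode"]: f for f in set_b}
    return [(f, b_by_code[f["IdentifierCode"]])
            for f in set_a if f["IdentifierCode"] in b_by_code]
```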
The Affine Transform Matrix is of the following form:

M11 M12 M13 T1
M21 M22 M23 T2
M31 M32 M33 T3
0   0   0   1
This matrix requires the bottom row to be [0 0 0 1] to preserve the homogeneous coordinates.
The matrix can be one of three types: RIGID, RIGID_SCALE, and AFFINE. These types represent different constraints on the allowable values of the matrix elements.
This transform requires the matrix to obey the orthonormal transformation properties:

M1j M1k + M2j M2k + M3j M3k = δjk

for all combinations of j = 1,2,3 and k = 1,2,3, where δjk = 1 for j = k and zero otherwise.
The expansion into non-matrix equations is:
The Frame of Reference Transformation Matrix AMB describes how to transform a point (Bx,By,Bz) with respect to RCSB into (Ax,Ay,Az) with respect to RCSA.
The matrix above consists of two parts, a rotation and a translation, as shown below:
The first column [M11,M21,M31 ] are the direction cosines (projection) of the X-axis of RCSB with respect to RCSA . The second column [M12,M22,M32] are the direction cosines (projection) of the Y-axis of RCSB with respect to RCSA. The third column [M13,M23,M33] are the direction cosines (projection) of the Z-axis of RCSB with respect to RCSA. The fourth column [T1,T2,T3] is the origin of RCSB with respect to RCSA.
There are three degrees of freedom representing rotation, and three degrees of freedom representing translation, giving a total of six degrees of freedom.
The following constraint applies:

Mj1 Mk1 + Mj2 Mk2 + Mj3 Mk3 = Sj² δjk

for all combinations of j = 1,2,3 and k = 1,2,3, where δjk = 1 for j = k and zero otherwise.
The expansion into non-matrix equations is:
The above equations show a simple way of extracting the spatial scaling parameters Sj from a given matrix. The unit of Sj² is the RCS unit dimension of one millimeter.
This type can be considered a simple extension of the type RIGID. The RIGID_SCALE is easily created by pre-multiplying a RIGID matrix by a diagonal scaling matrix as follows:
where MRBWS is a matrix of type RIGID_SCALE and MRB is a matrix of type RIGID.
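That pre-multiplication is straightforward to express in code. A sketch with nested-list matrices (pre-multiplying by diag(s1, s2, s3, 1) scales the rows of the rotation part):

```python
def make_rigid_scale(rigid, scales):
    """Form a RIGID_SCALE matrix by pre-multiplying a RIGID 4x4 matrix
    with the diagonal scaling matrix diag(s1, s2, s3, 1)."""
    return [[(scales[i] if i < 3 else 1.0) * rigid[i][j] for j in range(4)]
            for i in range(4)]
```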
No constraints apply to this matrix, so it contains twelve degrees of freedom. This type of Frame of Reference Transformation Matrix allows shearing in addition to rotation, translation and scaling.
For a RIGID type of Frame of Reference Transformation Matrix, the inverse is easily computed using the following formula (inverse of an orthonormal matrix):
For RIGID_SCALE and AFFINE types of Registration Matrices, the inverse cannot be calculated using the above equation, and must be calculated using a conventional matrix inverse operation.
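The orthonormal shortcut for RIGID matrices can be sketched as follows, with nested-list matrices: the inverse rotation is the transpose, and the inverse translation is -R^T * T.

```python
def invert_rigid(m):
    """Invert a RIGID 4x4 Frame of Reference Transformation Matrix using
    the orthonormal property: R^-1 = R^T, and T' = -R^T * T."""
    rt = [[m[j][i] for j in range(3)] for i in range(3)]            # R^T
    t = [-sum(rt[i][k] * m[k][3] for k in range(3)) for i in range(3)]
    return [rt[0] + [t[0]], rt[1] + [t[1]], rt[2] + [t[2]],
            [0.0, 0.0, 0.0, 1.0]]
```

Multiplying a RIGID matrix by the result yields the identity, which is a convenient self-check.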
The templates for the Breast Imaging Report are defined in PS3.16. Relationships defined in the Breast Imaging Report templates are by-value. This template structure may be conveyed using the Enhanced SR SOP Class or the Basic Text SR SOP Class.
As shown in Figure Q.1-1, the Breast Imaging Report Narrative and Breast Imaging Report Supplementary Data sub-trees together form the Content Tree of the Breast Imaging Report.
The Breast Imaging Procedure Reported sub-tree is a mandatory child of the Supplementary Data Content Item, to describe all of the procedures to which the report applies using coded terminology. It may also be used as a sub-tree of sections within the Supplementary Data sub-tree, for the case in which a report covers more than one procedure but different sections of the Supplementary Data record the evidence of a subset of the procedures.
An instance of the Breast Imaging Report Narrative sub-tree contains one or more text-based report sections, with a name chosen from CID 6052 “Breast Imaging Report Section Title”. Within a report section, one or more observers may be identified. This sub-tree is intended to contain the report text as it was created, presented to, and signed off by the verifying observer. It is not intended to convey the exact rendering of the report, such as formatting or visual organization. Report text may reference one or more image or other composite objects on which the interpretation was based.
An instance of the Breast Imaging Report Supplementary Data sub-tree contains one or more of: Breast Imaging Procedure Reported, Breast Composition Section, Breast Imaging Report Finding Section, Breast Imaging Report Intervention Section, Overall Assessment. This sub-tree is intended to contain the supporting evidence for the Breast Imaging Report Narrative sub-tree, using coded terminology and numeric data.
The Breast Imaging Assessment sub-tree may be instantiated as the content of an Overall Assessment section of a report (see Figure Q.1-4), or as part of a Findings section of a report (see TID 4206 “Breast Imaging Report Finding Section”). Reports may provide an individual assessment for each Finding, and then an overall assessment based on an aggregate of the individual assessments.
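The two-sub-tree, by-value structure can be pictured with a minimal sketch using plain dictionaries. The helper names and the concept-name strings below are illustrative stand-ins, not the actual coded terminology defined in PS3.16:

```python
# Each content item carries a value type, a concept name, and (for
# by-value CONTAINS relationships) a nested list of child items.
def container(concept_name, children):
    return {"RelationshipType": "CONTAINS", "ValueType": "CONTAINER",
            "ConceptName": concept_name, "ContentSequence": children}

def text_item(concept_name, value):
    return {"RelationshipType": "CONTAINS", "ValueType": "TEXT",
            "ConceptName": concept_name, "TextValue": value}

report = container("Breast Imaging Report", [
    # Narrative sub-tree: text-based sections (titles drawn from CID 6052)
    container("Findings", [
        text_item("Finding",
                  "No significant masses, calcifications, or other "
                  "abnormalities are present."),
    ]),
    # Supplementary Data sub-tree: coded evidence supporting the narrative
    container("Supplementary Data", [
        text_item("Overall Assessment", "BI-RADS Category 1: Negative"),
    ]),
])
```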
The following are simple illustrations of encoding Mammography procedure-based Breast Imaging Reports.
A screening mammography case, typically consisting of four films with no suspicious abnormalities. The result is a negative mammogram with basic reporting. This example illustrates a report encoded as narrative text only:
Example Q.2-1. Report Sample: Narrative Text Only
Film screen mammography, both breasts.
Comparison was made to exam from 11/14/2001. The breasts are heterogeneously dense. This may lower the sensitivity of mammography. No significant masses, calcifications, or other abnormalities are present. There is no significant change from the prior exam.
BI-RADS® Category 1: Negative. Recommend normal interval follow-up in 12 months
Table Q.2-1. Breast Imaging Report Content for Example 1
A screening mammography case, typically consisting of four films with no suspicious abnormalities. The result is a negative mammogram with basic reporting. This example illustrates a report encoded as narrative text with minimal supplementary data, and follows BI-RADS® and MQSA:
Example Q.2-2. Report Sample: Narrative Text with Minimal Supplementary Data
Film screen mammography, both breasts.
Comparison was made to exam from 11/14/2001.
The breasts are heterogeneously dense. This may lower the sensitivity of mammography.
No significant masses, calcifications, or other abnormalities are present. There is no significant change from the prior exam.
BI-RADS® Category 1: Negative. Recommend normal interval follow-up in 12 months.
Table Q.2-2. Breast Imaging Report Content for Example 2
A diagnostic mammogram was prompted by a clinical finding. The result is a probably benign finding with a short interval follow-up of the left breast. This report provides the narrative text with more extensive supplementary data.
Example Q.2-3. Report Sample: Narrative Text with More Extensive Supplementary Data
Film screen mammography, left breast.
Non-bloody discharge left breast.
The breast is almost entirely fat.
Film screen mammograms were performed. There are heterogeneous calcifications regionally distributed in the 1 o'clock upper outer quadrant, anterior region of the left breast. There is an increase in the number of calcifications from the prior exam.
BI-RADS® Category 3: Probably Benign Finding. Short interval follow-up of the left breast is recommended in 6 months.
Table Q.2-3. Breast Imaging Report Content for Example 3
Following a screening mammogram, the patient was asked to return for additional imaging and an ultrasound on the breast, for further evaluation of a mammographic mass. This example demonstrates a report on multiple breast imaging procedures. This report provides the narrative text with some supplementary data.
Example Q.2-4. Report Sample: Multiple Procedures, Narrative Text with Some Supplementary Data
Film screen mammography, left breast; Ultrasound procedure, left breast.
Additional evaluation requested at current screening.
Comparison was made to exam from 11/14/2001.
Film Screen Mammography: A lobular mass with obscured margins, measuring 7 mm, is present in the upper outer quadrant.
Ultrasound demonstrates a simple cyst.
BI-RADS® Category 2: Benign, no evidence of malignancy. Normal interval follow-up of both breasts is recommended in 12 months.
Table Q.2-4. Breast Imaging Report Content for Example 4
The following use cases are the basis for the decisions made in defining the Configuration Management Profiles specified in PS3.15. Where possible, protocols that are commonly used in IT system management are specifically identified.
When a new machine is added there need to be new entries made for:
The service staff effort needed for either of these should be minimal. To the extent feasible these parameters should be generated and installed automatically.
The need for some sort of ID is common to most of the use cases, so it is assumed that each machine has sufficient non-volatile storage to at least remember its own name for later use.
Updates may be made directly to the configuration databases or made via the machine being configured. A common procedure for large networks is for the initial network design to assign these parameters and create the initial databases during the complete initial network design. Updates can be made later as new devices are installed.
One step that specifically needs automation is the allocation of AE Titles. These must be unique. Their assignment has been a problem with manual procedures. Possibilities include:
Fully automatic allocation of AE Titles as requested. This interacts with the need for AE title stability in some use cases. The automatic process should permit AE Titles to be persistently associated with particular devices and application entities. The automatic process should permit the assignment of AE titles that comply with particular internal structuring rules.
Assisted manual allocation, where the service staff proposes AE Titles (perhaps based on examining the list of present AE Titles) and the system accepts them as unique or rejects them when non-unique.
These AE Titles can then be associated with the other application entity related information. This complete set of information needs to be provided for later uses.
The local setup may also involve searches for other AEs on the network. For example, it is likely that a search will be made for archives and printers. These searches might be by SOP class or device type. This is related to vendor specific application setup procedures, which are outside the scope of DICOM.
The network may have been designed in advance and the configuration specified in advance. It should be possible to pre-configure the configuration servers prior to other hardware installation. This should not preclude later updates or later configuration at specific devices.
The DHCP servers have a manually maintained database defining the relationship between machine parameters and IP parameters. This defines:
Hardware MAC addresses that are to be allocated specific fixed IP information.
Client machine names that are to be allocated specific fixed IP information.
Hardware MAC addresses and address ranges that are to be allocated dynamically assigned IP addresses and IP information.
Client machine name patterns that are to be allocated dynamically assigned IP addresses and IP information.
The IP information that is provided will be a specific IP address together with other information. The present recommendation is to provide all of the following information when available.
The manual configuration of DHCP is often assisted by automated user interface tools that are outside the scope of DICOM. Some people utilize the DHCP database as a documentation tool for documenting the assignment of IP addresses that are preset on equipment. This does not interfere with DHCP operation and can make a gradual transition from equipment presets to DHCP assignments easier. It also helps avoid accidental re-use of IP addresses that are already manually assigned. However, DHCP does not verify that these entries are in fact correct.
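The kinds of database entries described above might look as follows in ISC dhcpd configuration syntax (a sketch; all addresses, names, and MAC values are illustrative):

```
# Fixed IP information reserved for a known hardware MAC address
host mri-scanner-1 {
    hardware ethernet 00:11:22:33:44:55;
    fixed-address 10.0.1.20;
    option host-name "mri1";
}

# Dynamically assigned addresses for a range, with other IP information
subnet 10.0.1.0 netmask 255.255.255.0 {
    range 10.0.1.100 10.0.1.200;
    option routers 10.0.1.1;
    option domain-name-servers 10.0.1.2;
    option ntp-servers 10.0.1.3;
}
```

A host entry with no dynamic range can also serve purely as documentation for equipment with preset addresses, as noted above.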
There are several ways that the LDAP configuration information can be obtained.
A complete installation may be pre-designed and the full configuration loaded into the LDAP server, with the installation Attribute set to false. Then as systems are installed, they acquire their own configurations from the LDAP server. The site administration can set the installation Attribute to true when appropriate.
When the LDAP server permits network clients to update the configuration, they can be individually installed and configured. Then after each device is configured, that device uploads its own configuration to the LDAP server.
When the LDAP server does not permit network clients to update configurations, they can be individually installed and configured. Then, instead of uploading their own configuration, they create a standard format file with their configuration objects. This file is then manually added to the LDAP server (complying with local security procedures) and any conflicts resolved manually.
The network may have been designed in advance and the configuration specified in advance. It should be possible to pre-configure the configuration servers prior to other hardware installation. This should not preclude later updates or later configuration at specific devices.
LDAP defines a standard file exchange format for transmitting LDAP database subsets in an ASCII format. This file exchange format can be created by a variety of network configuration tools. There are also systems that use XML tools to create database subsets that can be loaded into LDAP servers. It is out of scope to specify these tools in any detail. The use case simply requires that such tools be available.
When the LDAP database is pre-configured using these tools, it is the responsibility of the tools to ensure that the resulting database entries have unique names. The unique name requirement is common to any LDAP database and not just to DICOM AE Titles. Consequently, most tools have mechanisms to ensure that the database updates that they create do have unique names.
At an appropriate time, the installed Attribute is set on the device objects in the LDAP configuration.
The "unconfigured" device start up begins with use of the pre-configured services from DHCP, DNS, and NTP. It then performs device configuration and updates the LDAP database. This description assumes that the device has been given permission to update the LDAP database directly.
DHCP is used to obtain IP related parameters. The DHCP request can indicate a desired machine name that DHCP can associate with a configuration saved at the DHCP server. DHCP does not guarantee that the desired machine name will be granted because it might already be in use, but this mechanism is often used to maintain specific machine configurations. The DHCP will also update the DNS server (using the DDNS mechanisms) with the assigned IP address and hostname information. Legacy note: A machine with pre-configured IP addresses, DNS servers, and NTP servers may skip this step. As an operational and documentation convenience, the DHCP server database may contain the description of this pre-configured machine.
The list of NTP servers is used to initiate the NTP process for obtaining and maintaining the correct time. This is an ongoing process that continues for the duration of device activity. See Time Synchronization below.
The list of DNS servers is used to obtain the address of the DNS servers at this site. Then the DNS servers are queried to get the list of LDAP servers. This utilizes a relatively new addition to the DNS capabilities that permit querying DNS to obtain servers within a domain that provide a particular service.
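The DNS capability referred to here is the SRV resource record (RFC 2782): a query for a name such as _ldap._tcp.<domain> returns prioritized, weighted server records. A sketch of the RFC 2782 selection rule follows (the record values are illustrative):

```python
import random

def pick_srv(records, rng=None):
    """Select a DNS SRV record per RFC 2782: the lowest priority value
    wins; among records of equal priority, pick randomly in proportion
    to weight. Each record is a (priority, weight, port, target) tuple."""
    rng = rng or random
    best = min(r[0] for r in records)
    candidates = [r for r in records if r[0] == best]
    total = sum(r[1] for r in candidates)
    if total == 0:
        return candidates[0]
    point = rng.uniform(0, total)
    running = 0
    for rec in candidates:
        running += rec[1]
        if point <= running:
            return rec
    return candidates[-1]

# e.g. the answer to an SRV query for _ldap._tcp.example.org:
records = [
    (10, 60, 389, "ldap1.example.org"),
    (10, 40, 389, "ldap2.example.org"),
    (20, 100, 389, "backup.example.org"),  # used only if priority 10 fails
]
server = pick_srv(records)
```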
The LDAP servers are queried to find the server that provides DICOM configuration services, and then obtain a description for the device matching the assigned machine name. This description includes device specific configuration information and a list of Network AEs. For the unconfigured device there will be no configuration found.
Through a device specific process it determines its internal AE structure. During initial device installation it is likely that the LDAP database lacks information regarding the device. Using some vendor specific mechanism, e.g., service procedures, the device configuration is obtained. This device configuration includes all the information that will be stored in the LDAP database. The fields for "device name" and "AE Title" are tentative at this point.
Each of the Network AE objects is created by means of the LDAP object creation process. It is at this point that LDAP determines whether the AE Title is in fact unique among all AE Titles. If the title is unique, the creation succeeds. If there is a conflict, the creation fails and "name already in use" is given as a reason. LDAP uses propose/create as an atomic operation for creating unique items. The LDAP approach permits unique titles that comply with algorithms for structured names, check digits, etc. DICOM does not require structured names, but they are a commonplace requirement for other LDAP users. It may take multiple attempts to find an unused name. This multiple-probe behavior can be a problem if "unconfigured device" is a common occurrence and name collisions are common. Name collisions can be minimized at the expense of name structure by selecting names such as "AExxxxxxxxxxxxxx", where "xxxxxxxxxxxxxx" is a truly randomly selected number. The odds of collision are then exceedingly small, and a unique name will be found within one or two probes.
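The random-name strategy combined with an atomic propose/create operation can be sketched as follows (the create callback is a stand-in for the LDAP object-creation call; all names are hypothetical):

```python
import random

def random_ae_title(rng=random):
    """Candidate AE Title: 'AE' followed by 14 random digits, filling
    the 16-character AE Title limit."""
    return "AE" + "".join(rng.choice("0123456789") for _ in range(14))

def allocate_ae_title(create, max_probes=10):
    """Propose/create loop. `create` stands in for the atomic LDAP
    object creation: it returns True on success and False for
    'name already in use', in which case another title is proposed."""
    for _ in range(max_probes):
        title = random_ae_title()
        if create(title):
            return title
    raise RuntimeError("could not allocate a unique AE Title")

# Simulated registry standing in for the LDAP server's uniqueness check:
existing = set()
def create(title):
    if title in existing:
        return False
    existing.add(title)
    return True

t1 = allocate_ae_title(create)
t2 = allocate_ae_title(create)
```

Because the registry rejects duplicates atomically, the two allocated titles are guaranteed to differ, usually on the first probe.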
The device object is created. The device information is updated to reflect the actual AE titles of the AE objects. As with AE objects, there is the potential for device name collisions.
The network connection objects are created as subordinates to the device object.
The AE objects are updated to reflect the names of the network connection objects.
The "unconfigured device" now has a saved configuration. The LDAP database reflects its present configuration.
In the following example, the new system needs two AE Titles. During its installation, another machine is also being installed and takes one of the two AE Titles that the first machine expected to use. The new system then claims a different AE Title that does not conflict.
Much of the initial start up is the same for restarting a configured device and for configuring a client first and then updating the server. The difference is two-fold.
The AE Title uniqueness must be established manually, and the configuration information saved at the client onto a file that can then be provided to the LDAP server. There is a risk that the manually assigned AE Title is not unique, but this can be managed and is easier than the present entirely manual process for assigning AE Titles.
The larger enterprise networks require prompt database responses and reliable responses during network disruptions. This implies the use of a distributed or federated database. These have update propagation issues. There is not a requirement for a complete and accurate view of the DICOM network at all times. There is a requirement that local subsets of the network maintain an accurate local view. For example, each hospital in a large hospital chain may tolerate occasional disconnections or problems in viewing the network information in other hospitals in that chain, but they require that their own internal network be reliably and accurately described.
LDAP supports a variety of federation and distribution schemes. It specifically states that it is designed and appropriate for federated situations where distribution of updates between federated servers may be slow. It is specifically designed for situations where database updates are infrequent and database queries dominate.
Legacy devices utilize some internal method for obtaining the IP addresses, port numbers, and AE Titles of the other devices. For legacy compatibility, a managed node must be controlled so that the IP addresses, port numbers, and AE Titles do not change. This affects DHCP because it is DHCP that assigns IP addresses. The LDAP database design must preserve port number and AE Title so that once the device is configured these do not change.
DHCP was designed to deal with some common legacy issues:
Documenting legacy devices that do not utilize DHCP. Most DHCP servers can document a legacy device with a DHCP entry that describes the device. This avoids IP address conflicts. Since this is a manual process, there still remains the potential for errors. The DHCP server configuration is used to reserve the addresses and document how they are used. This documented entry approach is also used for complex multi-homed servers. These are often manually configured and kept with fixed configurations.
Specifying fixed IP addresses for DHCP clients. Many servers have clients that are not able to use DNS to obtain server IP addresses. These servers may also utilize DHCP for start up configuration. The DHCP servers must support the use of fixed IP allocations so that the servers are always assigned the same IP address. This avoids disrupting access by the server's legacy clients. This usage is quite common because it gives the IT administrators the centralized control that they need without disrupting operations. It is a frequent transitional stage for machines on networks that are transitioning to full DHCP operation.
There are two legacy-related issues with time configuration:
The NTP system operates in UTC. The device users probably want to operate in local time. This introduces additional internal software requirements to configure local time. DHCP will provide this information if that option is configured into the DHCP server.
Device clock setting must be documented correctly. Some systems set the battery-powered clock to local time; others use UTC. Incorrect settings will introduce very large time transient problems during start up. Eventually NTP clients do resolve the huge mismatch between battery clock and NTP clock, but the device may already be in medical use by the time this problem is resolved. The resulting time discontinuity can then pose problems. The magnitude of this problem depends on the particular NTP client implementation.
Managed devices can utilize the LDAP database during their own installation to establish configuration parameters such as the AE Title of destination devices. They may also utilize the LDAP database to obtain this information at run time prior to association negotiation.
The LDAP server supports simple relational queries. This query can be phrased:
Then, for each of those devices, query
The result will be the Network AE entries that match those two criteria. The first criterion selects the device type match. There are LDAP scoping controls that determine whether the queries search the entire enterprise or just this server. LDAP does not support complex queries, transactions, constraints, nesting, etc. LDAP cannot provide the hostnames for these Network AEs as part of a single query. Instead, the returned Network AEs will include the names of the network connections for each Network AE. Then the application would need to issue LDAP reads using the DN of the NetworkConnection objects to obtain the hostnames.
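The two-step query could be phrased as follows. This is a sketch: the object class and attribute names are taken from the DICOM application configuration schema in PS3.15, while the helper names and device type value are hypothetical:

```python
def device_filter(device_type):
    """Step 1: find devices of the requested type. dicomDevice and
    dicomPrimaryDeviceType come from the DICOM configuration schema
    (PS3.15)."""
    return f"(&(objectClass=dicomDevice)(dicomPrimaryDeviceType={device_type}))"

# Step 2: for each matching device entry, search beneath its DN for its
# Network AEs; the caller then issues separate LDAP reads on each AE's
# network connection references to resolve hostnames.
NETWORK_AE_FILTER = "(objectClass=dicomNetworkAE)"

def ae_search(device_dn):
    """Return the (search base, filter) pair for step 2 of the query."""
    return (device_dn, NETWORK_AE_FILTER)

print(device_filter("CT"))
# -> (&(objectClass=dicomDevice)(dicomPrimaryDeviceType=CT))
```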
Normal start up of an already configured device will obtain IP information and DICOM information from the servers.
The device start up sequence is:
DHCP is used to obtain IP related parameters. The DHCP request can indicate a desired machine name that DHCP can associate with a configuration saved at the DHCP server. DHCP does not guarantee that the desired machine name will be granted because it might already be in use, but this mechanism is often used to maintain specific machine configurations. The DHCP will also update the DNS server (using the DDNS mechanisms) with the assigned IP address and hostname information. Legacy note: A machine with pre-configured IP addresses, DNS servers, and NTP servers may skip this step. As an operational and documentation convenience, the DHCP server database may contain the description of this pre-configured machine.
The list of NTP servers is used to initiate the NTP process for obtaining and maintaining the correct time. This is an ongoing process that continues for the duration of device activity. See Time Synchronization below.
The list of DNS servers is used to obtain the list of LDAP servers. This utilizes a relatively new addition to the DNS capabilities that permit querying DNS to obtain servers within a domain that provide a particular service.
The "nearest" LDAP server is queried to obtain a description for the device matching the assigned machine name. This description includes device specific configuration information and a list of Network AEs.
The AE descriptions are obtained from the LDAP server. Key information in the AE description is the assigned AE Title. The AE descriptions probably include vendor unique information in either the vendor text field or vendor extensions to the AE object. The details of this information are vendor unique. DICOM is defining a mandatory minimum capability because this will be a common need for vendors that offer dynamically configurable devices. The AE description may be present even for devices that do not support dynamic configuration. If the device has been configured with an AE Title and description that is intended to be fixed, then a description should be present in the LDAP database. The device can confirm that the description matches its stored configuration. The presence of the AE Title in the description will prevent later network activities from inadvertently re-using the same AE Title for another purpose. The degree of configurability may also vary. Many simple devices may only permit dynamic configuration of the IP address and AE Title, with all other configuration requiring local service modifications.
The device performs whatever internal operations are involved to configure itself to match the device description and AE descriptions.
At this point, the device is ready for regular operation, the DNS servers will correctly report its IP address when requested, and the LDAP server has a correct description of the device, Network AEs, and network connections.
The lease timeouts eventually release the IP address at DHCP, which can then update DNS to indicate that the host is down. Clients that utilize the hostname information in the LDAP database will initially experience reports of connection failure, and then, after DNS is updated, they will get errors indicating the device is down when they attempt to use it. Clients that use the IP entry directly will experience reports of connection failure.
A device may be deliberately placed offline in the LDAP database to indicate that it is unavailable and will remain unavailable for an extended period of time. This may be utilized during system installation so that pre-configured systems can be marked as offline until the system installation is complete. It can also be used for systems that are down for extended maintenance or upgrades. It may be useful for equipment that is on mobile vans and only present for certain days.
For this purpose a separate Installed Attribute has been given to devices, Network AEs, and Network Connections so that it can be manually managed.
Medical device time requirements primarily deal with synchronization of machines on a local network or campus. There are very few requirements for accurate time (synchronized with an international reference clock). DICOM time users are usually concerned with:
Other master clocks and time references (e.g., sidereal time) are not relevant to medical users.
High accuracy time synchronization is needed for devices like cardiology equipment. The measurements taken on various different machines are recorded with synchronization modules specifying the precise time base for measurements such as waveforms and Multi-frame Images. These are later used to synchronize data for analysis and display.
Synchronized to within approximately 10 milliseconds. This corresponds to a few percent of a typical heartbeat. Under some circumstances, the requirements may be stricter than this.
During the measurement period there should be no discontinuities greater than a few milliseconds. The time base rate should be within 0.01% of standard time rate.
International Time Synchronization
There are no special extra requirements. Note however that time base stability conflicts with time synchronization when UTC time jumps (e.g., leap seconds).
Ordinary medical equipment uses time synchronization to perform functions that were previously performed manually, e.g., record-keeping and scheduling. These were typically done using watches and clocks, with resultant stability and synchronization errors measured in seconds or longer. The most stringent time synchronization requirements for networked medical equipment derive from some of the security protocols and their record keeping.
Synchronized to within approximately 500 milliseconds. Some security systems have problems when the synchronization error exceeds 1 second.
Large drift errors may cause problems. Typical clock drift errors of approximately 1 second/day are unlikely to cause problems. Large discontinuities are permissible if rare or during start up. Time may run backwards, but only during rare large discontinuities.
International Time Synchronization
Some sites require synchronization to within a few seconds of UTC. Others have no requirement.
The local system time of a computer is usually provided by two distinct components.
There is a battery-powered clock that is used to establish an initial time estimate when the machine is turned on. These clocks are typically very inaccurate. Local and international synchronization errors are often 5-10 minutes. In some cases, the battery clock is incorrect by hours or days.
The ongoing system time is provided by a software function and a pulse source. The pulse source "ticks" at some rate between 1 and 1000 Hz. It has a nominal tick rate that is used by the system software. For every tick, the system software increments the current time estimate appropriately. E.g., for a system with a 100 Hz tick, the system time increments 10 ms each tick.
This lacks any external synchronization and is subject to substantial initial error in the time estimate and to errors due to systematic and random drift in the tick source. The tick sources are typically low cost quartz crystal based, with a systematic error up to approximately 10⁻⁵ in the actual versus nominal tick rate and with a variation due to temperature, pressure, etc. up to approximately 10⁻⁵. This corresponds to drifts on the order of 10 seconds per day.
There is a well established Internet protocol (NTP) for maintaining time synchronization that should be used by DICOM. It operates in several ways.
The most common is for the computer to become an NTP client of one or more NTP servers. As a client it uses occasional ping-pong NTP messages to:
Estimate the network delays. These estimates are updated during each NTP update cycle.
Obtain a time estimate from the server. Each estimate includes the server's own statistical characteristics and accuracy assessment of the estimate.
Use the time estimates from the servers, the network delay estimates, and the time estimates from the local system clock, to obtain a new NTP time estimate. This typically uses modern statistical methods and filtering to perform optimal estimation.
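The ping-pong exchange yields four timestamps from which the offset and delay estimates are derived; the standard NTP on-wire calculation is shown below (the function name and example values are illustrative):

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP on-wire calculation for one request/response exchange.
    t1 = client transmit, t2 = server receive,
    t3 = server transmit, t4 = client receive (all in seconds).
    Returns (client clock offset vs. server, round-trip network delay)."""
    offset = ((t2 - t1) + (t3 - t4)) / 2
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay

# Client clock 0.150 s behind the server, 0.020 s network delay each way:
offset, delay = ntp_offset_delay(100.000, 100.170, 100.171, 100.041)
print(round(offset, 3), round(delay, 3))  # -> 0.15 0.04
```

A full NTP client filters many such samples statistically; SNTP, discussed below, effectively takes each sample at face value.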
The local applications do not normally communicate with the NTP client software. They normally continue to use the system clock services. The NTP client software adjusts the system clock. The NTP standard defines a nominal system clock service as having two adjustable parameters:
The clock frequency. In the example above, the nominal clock was 100Hz, with a nominal increment of 10 milliseconds. Long term measurement may indicate that the actual clock is slightly faster and the NTP client can adjust the clock increment to be 9.98 milliseconds.
The clock phase. This adjustment permits jump adjustments, and is the fixed time offset between the internal clock and the estimated UTC.
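The frequency adjustment can be sketched as follows (a minimal illustration; the function name and numbers are assumptions, not part of the NTP specification):

```python
def corrected_increment(nominal_hz, drift_s_per_day):
    """Return the per-tick increment (seconds) that makes system time
    track true time, given the drift NTP has measured over the long
    term. Positive drift means the clock gains time (tick source fast)."""
    nominal_increment = 1.0 / nominal_hz
    rate_error = drift_s_per_day / 86400.0   # fractional frequency error
    # A fast clock delivers (1 + rate_error) times the nominal tick
    # count per true day, so each tick must advance time slightly less.
    return nominal_increment / (1.0 + rate_error)

# A 100 Hz clock gaining ~0.86 s/day (a 1e-5 rate error) needs its
# 10 ms increment shrunk very slightly:
print(corrected_increment(100, 0.864))  # just under 0.010
```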
The experience with NTP in the field is that NTP clients on the same LAN as their NTP server will maintain synchronization to within approximately 100 microseconds. NTP clients on the North American Internet and utilizing multiple NTP servers will maintain synchronization to within approximately 10 milliseconds.
There are low cost devices with only limited time synchronization needs. NTP has been updated to include SNTP for these devices. SNTP eliminates the estimation of network delays and eliminates the statistical methods for optimal time estimation. It assumes that the network delays are nil and that each NTP server time estimate received is completely accurate. This reduces the development and hardware costs for these devices. The computer processing costs for NTP are insignificant for a PC, but may be burdensome for very small devices. The SNTP synchronization errors are only a few milliseconds in a LAN environment. They are very topology sensitive and errors may become huge in a WAN environment.
Most NTP servers are in turn NTP clients to multiple superior servers and peers. NTP is designed to accommodate a hierarchy of server/clients that distributes time information from a few international standard clocks out through layers of servers.
The NTP implementations anticipate the use of three major kinds of external clock sources:
Many ISPs and government agencies offer access to NTP servers that are in turn synchronized with the international standard clocks. This access is usually offered on a restricted basis.
The US, Canada, Germany, and others offer radio broadcasts of time signals that may be used by local receivers attached to an NTP server. The US and Russia broadcast time signals from satellites, e.g., GPS. Some mobile telephone services broadcast time signals. These signals are synchronized with the international standard clocks. GPS time signals are popular worldwide time sources. Their primary problem is difficulties with proper antenna location and receiver cost. Most of the popular low cost consumer GPS systems save money by sacrificing the clock accuracy.
For extremely high accuracy synchronization, atomic clocks can be attached to NTP servers. These clocks do not provide a time estimate, but they provide a pulse signal that is known to be extremely accurate. The optimal estimation logic can use this in combination with other external sources to achieve sub-microsecond synchronization to a reference clock even when the devices are separated by the earth's diameter.
The details regarding selecting an external clock source and appropriate use of the clock source are outside the scope of the NTP protocol. They are often discussed and documented in conjunction with the NTP protocol and many such interfaces are included in the reference implementation of NTP.
In theory, servers can be SNTP servers and NTP servers can be SNTP clients of other servers. This is very strongly discouraged. The SNTP errors can be substantial, and the clients of a server using SNTP will not have the statistical information needed to assess the magnitude of these errors. It is feasible for SNTP clients to use NTP servers. The SNTP protocol packets are identical to the NTP protocol packets. SNTP differs in that some of the statistical information fields are filled with nominal SNTP values instead of having actual measured values.
There are several public reference implementations of NTP server and client software available. These are in widespread use and have been ported to many platforms (including Unix, Windows, and Macintosh). There are also proprietary and built-in NTP services for some platforms (e.g., Windows 2000). The public reference implementations include sample interfaces to many kinds of external clock sources.
There are significant performance considerations in the selection of locations for servers and clients. Devices that need high accuracy synchronization should probably be all on the same LAN together with an NTP server on that LAN.
Real-time operating system (RTOS) implementations may have greater difficulties. The reference NTP implementations have been ported to several RTOSs, but difficulties have been encountered with the internal system clock implementations on some of them. The dual frequency/phase adjustment requirements may require the clock functions to be rewritten. The reference implementations also require access to a separate high-resolution interval timer (with sub-microsecond accuracy and precision). This is a standard CPU feature for modern workstation processors, but may be missing on low-end processors.
An RTOS implementation with only ordinary synchronization requirements might choose to write its own SNTP-only implementation rather than use the reference NTP implementation. An SNTP client is very simple; it may be based on the reference implementation or written from scratch. The operating system support needed for accurate adjustment is optional for SNTP clients. The only requirement is the time base stability requirement, which usually implies the ability to specify fractional seconds when setting the time.
The conflict between the user's desire to use local time and NTP's use of UTC must be resolved in the device. DHCP offers the ability to obtain the offset between local time and UTC dynamically, provided the DHCP server supports this option. There remain issues such as service procedures, start up in the absence of DHCP, etc.
The differences between local time, UTC, summer time, etc. are a common source of confusion and of errors when setting the battery clock. The NTP algorithms will eventually resolve these errors, but the final convergence on correct time may be significantly delayed. The device might be ready for medical use before these errors are resolved.
There will usually be a period of time where a network will have some applications that utilize the configuration management protocols coexisting with applications that are only manually configured. The transition issues arise when a legacy Association Requester interacts with a managed Association Acceptor or when a managed Association Requester interacts with a legacy Association Acceptor. Some of these issues also arise when the Association Requester and Association Acceptor support different configuration management profiles. These are discussed below and some general recommendations made for techniques that simplify the transition to a fully configuration managed network.
The legacy Association Requester requires that the IP address of the Association Acceptor not change dynamically because it lacks the ability to utilize DNS to obtain the current IP address of the Association Acceptor. The legacy Association Requester also requires that the AE Title of the Association Acceptor be provided manually.
The DHCP server should be configurable with a database of hostname, IP, and MAC address relationships. The DHCP server can be configured to provide the same IP address every time that a particular machine requests an IP address. This is a common requirement for Association Acceptors that obtain IP addresses from DHCP. The Association Acceptor may be identified by either the hardware MAC address or the hostname requested by the Association Acceptor.
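As a sketch of such a configuration (hostname, MAC address, and IP address are all hypothetical), an ISC dhcpd host declaration reserving a fixed address for a particular machine might look like:

```
# Always hand the same address to the Association Acceptor's interface
host archive1 {
    hardware ethernet 00:a0:c9:12:34:56;   # MAC address of the Association Acceptor
    fixed-address 192.168.10.20;
    option host-name "archive1";
}
```

Other DHCP server products express the same hostname/MAC/IP relationship with different syntax; the underlying database concept is the same.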
The IP address can be permanently assigned as a static IP address, so that a legacy Association Requester can be configured to use that IP address while a managed Association Requester can utilize the DNS services to obtain it.
No specific actions are needed, although see below regarding the possibility that the DHCP server does not perform DDNS updates.
Although the managed Association Acceptor may obtain information from the LDAP server, the legacy Association Requester will not. This means that the legacy mechanisms for establishing AE Titles and related information on the Association Requester will need to be coordinated manually. Most LDAP products have suitable GUI mechanisms for examining and updating the LDAP database. These are not specified by this Standard.
An LDAP entry for the Association Requester should be manually created, although this may be a very abbreviated entry. It is needed so that the AE Title mechanisms can maintain unique AE Titles. There must be entries created for each of the AEs on the legacy Association Requester.
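As an illustrative sketch only (the actual object classes and directory structure are defined by the configuration management profiles, and the DN components and AE Title here are hypothetical), a minimal registry entry in LDIF form might resemble:

```
dn: dicomAETitle=LEGACY_WS1,cn=Unique AE Titles Registry,cn=DICOM Configuration,dc=example,dc=org
objectclass: dicomUniqueAETitle
dicomAETitle: LEGACY_WS1
```

Such an entry reserves the AE Title so that later automated assignments cannot collide with the manually configured legacy device.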
The legacy Association Requester will need to be configured based on manual examination of the LDAP information for the server and using the legacy procedures for that Association Requester.
The DHCP server may need to be configured with a pre-assigned IP address for the Association Requester if the legacy Association Acceptor restricts access by IP addresses. Otherwise no special actions are needed.
The legacy Association Acceptor hostname and IP address should be manually placed into the DNS database.
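As a hypothetical sketch of such a manual entry (name and address are illustrative), a BIND-style forward zone file line for the Association Acceptor might read:

```
; fragment of the forward zone file for example.org
archive1    IN  A   192.168.10.20
```

A corresponding PTR record in the reverse zone is usually maintained at the same time.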
The LDAP server should be configured with a full description of the legacy Association Acceptor, even though the Association Acceptor itself cannot provide this information. This will need to be done manually, most likely using GUI tools. The legacy Association Acceptor will need to be manually configured to match the AE Titles and other configuration information.
In the event that the DHCP server or DNS server does not support or permit DDNS updates, the DNS server database will need to be manually configured. Also, because these updates are not occurring, all of the machines should have fixed pre-assigned IP addresses. This is not strictly necessary for clients, since they will not have incoming DICOM connections, but it may be needed for other reasons. In practice, maintaining this file is very similar to maintaining the older hostname files. There is still a significant administrative gain, because only the DNS and DHCP configuration files need to be maintained, instead of maintaining files on each of the servers and clients.
It is likely that some devices will support only some of the system management profiles. A typical example of such partial support is a node that supports:
Configurations like this are common because many operating system platforms provide complete tools for implementing these clients. The support for LDAP Client requires application support and is often released on a different cycle than the operating system support. These devices will still have their DICOM application manually configured, but will utilize the DHCP, DNS, and NTP services.
The addition of the first fully managed device to a legacy network requires both server setup and device setup.
The managed node requires that servers be installed or assigned to provide the following actors:
These may be existing servers that need only administrative additions, existing hardware to which new software is added, or one or more new systems. DHCP, DNS, and NTP services are provided by a very wide variety of equipment.
The NTP server location relative to this device should be reviewed to be sure that it meets the timing requirements of the device. If it is an NTP client with a time accuracy requirement of approximately 1 second, almost any NTP server location will be acceptable. For SNTP clients and devices with high time accuracy requirements, it is possible that an additional NTP server or network topology adjustment may be needed.
If the NTP server is using secured time information, certificates or passwords may need to be exchanged.
There are advantages to documenting the unmanaged nodes in the DHCP database. This is not critical for operations, but it helps avoid administrative errors. Most DHCP servers support the definition of pre-allocated static IP addresses. The unmanaged nodes can be documented by including entries for static IP addresses for the unmanaged nodes. These nodes will not be using the DHCP server initially, but having their entries in the DHCP database helps reduce errors and simplifies gradual transitions. The DHCP database can be used to document the manually assigned IP addresses in a way that avoids unintentional duplication.
The managed node must be documented in the DHCP database. The NTP and DNS server locations must be specified.
If this device is an association acceptor, it should probably be assigned a fixed IP address. Many legacy devices cannot operate properly when communicating with devices that have dynamically assigned IP addresses. The legacy device does not utilize the DNS system, so the DDNS updates that maintain the changing IP address are not available to it. Consequently, most managed nodes that are association acceptors must be assigned a static IP address. The DHCP system still provides the IP address to the device during the boot process, but it is configured to provide the same IP address every time. The legacy systems are configured to use that IP address.
Most DNS servers have a database for hostname to IP relationships that is similar to the DHCP database. The unmanaged devices that will be used by the managed node must have entries in this database so that machine IP addresses can be found. It is often convenient to document all of the hostnames and IP addresses for the network into the DNS database. This is a fairly routine administrative task and can be done for the entire network and maintained manually as devices are added, moved, or removed. There are many administrative tools that expect DNS information about all network devices, and this makes that information available.
If DDNS updates are being used, the manually maintained portion of the DNS database must be adjusted to avoid conflicts.
There must be DNS entries provided for every device that will be used by the managed node.
The LDAP database should be configured to include device descriptions for this managed device, and there should be descriptions for the other devices that this device will communicate with. The first portion is used by this device during its start up configuration process. The second portion is used by this device to find the services that it will use.
The basic structural components of the DICOM information must be present on the LDAP server so that this device can find the DICOM root and its own entry. It is a good idea to fully populate the AE Title registry so that as managed devices are added there are no AE Title conflicts.
This device needs to be able to find the association acceptors (usually SCPs) that it will use during normal operation. These may need to be manually configured into the LDAP server. Their descriptions can be highly incomplete if these other devices are not managed devices. Only enough information is needed to meet the needs of this device. If this device is manually configured and makes no LDAP queries to find services, then none of the other device descriptions are needed.
There are some advantages to manually maintaining the LDAP database for unmanaged devices. This can document the manually assigned AE Titles. The service and network connection information can be very useful during network planning and troubleshooting. The database can also be useful during service operations on unmanaged devices as a documentation aid. The decision whether to use the LDAP database as a documentation aid often depends upon the features provided with the LDAP server. If it has good tools for manually updating the LDAP database and good tools for querying and reporting, it is often a good investment to create a manually maintained LDAP database.
During the transition period devices will be switched from unmanaged to managed. This may be done in stages, with the LDAP client transition being done at a different time than the DHCP, DNS, and NTP client. This section describes a switch that changes a device from completely unmanaged to a fully managed device. The device itself may be completely replaced or simply have a software upgrade. Details of how the device is switched are not important.
If the device was documented as part of an initial full network documentation process, the entries in the DHCP and DNS databases need to be checked. If the entry is missing, wrong, or incomplete, it must be corrected in the DHCP and DNS databases. If the entries are correct, then no changes are needed to those servers. The device can simply start using the servers. The only synchronization requirement is that the DHCP and DNS servers be updated before the device, so these can be scheduled as convenient.
If the device is going to be dynamically assigned an IP address by the DHCP server, then the DNS server database should be updated to reflect that DDNS is now going to be used for this device. This update should not be made ahead of time. It should be made when the device is updated.
The NTP server location relative to this device should be reviewed to be sure that it meets the timing requirements of the device. If it is an NTP client with a time accuracy requirement of approximately 1 second, almost any NTP server location will be acceptable. For SNTP clients and devices with high time accuracy requirements, it is possible that an additional NTP server or network topology adjustment may be needed.
If the NTP server is using secured time information, certificates or passwords may need to be exchanged.
The association acceptors may be able to simply utilize the configuration information from the LDAP database, but it is likely that further configuration will be needed. Unmanaged nodes probably have only a minimal configuration in the database.
The Diameter Symmetry of a Stenosis is a parameter determining the symmetry in arterial plaque distribution.
The Symmetry Index is defined as a / b, where a is smaller than or equal to b. Both a and b are measured in the reconstructed artery at the position of the minimal luminal diameter.
Possible values of symmetry range from 0 to 1, where 0 indicates complete asymmetry and 1 indicates complete symmetry.
Reference: "Quantitative coronary arteriography: physiological aspects", pp. 102-103, in: Reiber and Serruys, Quantitative Coronary Arteriography, 1991.
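The definition above can be restated compactly:

```latex
\text{Symmetry Index} = \frac{a}{b}, \qquad 0 \le a \le b
```

so that values range from 0 (complete asymmetry) to 1 (complete symmetry).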
To compare the quantitative results with those provided by the usual visual interpretation, the left ventricular boundary is divided into 5 anatomical regions, denoted:
The computer-defined obstruction analysis calculates the reconstruction diameter based on the diameters outside the stenotic segment. This method is completely automated and user independent. The reconstructed diameter represents the diameter the artery would have had if the obstruction were not present.
The proximal and distal borders of the stenotic segment are automatically calculated.
The difference between the detected contour and the reconstructed contour inside the reconstructed diameter contour is considered to be the plaque.
Based on the reconstruction diameter at the Minimum Luminal Diameter (MLD) position a reference diameter for the obstruction is defined.
The interpolated reference obstruction analysis calculates a reconstruction diameter for each position in the analyzed artery. This reconstructed diameter represents the diameter the artery would have if no disease were present. The reconstruction diameter is a line fitted through at least two user-defined reference markers by linear interpolation.
By default, two reference markers are used, automatically positioned at 5% and 95% of the artery length.
To calculate a percentage diameter stenosis the reference diameter for the obstruction is defined as the reconstructed diameter at the position of the MLD.
In cases where the proximal and distal part of the analyzed artery have a stable diameter during the treatment and long-term follow-up, this method will produce a stable reference diameter for all positions in the artery.
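The interpolation described above can be sketched as follows. This is a minimal illustration, not part of this Standard; the position indexing, marker placement, and the conventional percentage-stenosis formula, 100 × (1 − MLD/reference), are assumptions of the sketch.

```python
def reconstructed_diameter(positions, diameters, marker_a, marker_b):
    """Linearly interpolate a reconstruction (reference) diameter at every
    position, from the measured diameters at two reference marker indices."""
    da, db = diameters[marker_a], diameters[marker_b]
    pa, pb = positions[marker_a], positions[marker_b]
    return [da + (db - da) * (p - pa) / (pb - pa) for p in positions]

def percent_diameter_stenosis(mld, reference_diameter):
    """Percentage diameter stenosis, using the reconstructed diameter at
    the MLD position as the reference."""
    return 100.0 * (1.0 - mld / reference_diameter)
```

With stable proximal and distal diameters, the interpolated line (and therefore the reference at the MLD position) changes little between baseline and follow-up studies, which is the stability property noted above.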
A vessel segment length as seen in the image is not always indicated as the same X-axis difference in the graph.
The X-axis of the graph is based on pixel positions on the midline and these points are not necessarily equidistant. This is caused by the fact that vessels do not only run perfectly horizontally or vertically, but also at angles.
When a vessel midline covers a number of pixel positions perfectly horizontally or vertically, it covers less distance in mm than a vessel that covers the same number of pixel positions at an angle. When a segment runs perfectly horizontally or vertically, the segment length equals the number of midline pixel points times the pixel spacing (each point of the midline is separated from the next by exactly the pixel spacing in mm), and each step on the X-axis also represents exactly one pixel spacing. This is not the case when the vessel runs at an angle. For example, for an artery positioned at a 45º angle, the distance between two adjacent points on the midline is √2 (approximately 1.4) times the pixel spacing.
As an example, suppose the artery consists of 10 elements (n = 10), each with a length of 1 mm (the pixel size). If the MLD were exactly in the center of the artery, you would expect the length from 0 to the MLD to be 5 sub-segments long, thus 5 mm. This is true if the artery runs horizontally or vertically (assuming an aspect ratio of 1).
If the artery is positioned at a 45º angle, then the length of each element is √2 times the pixel size compared to the previous example. Thus the length depends on the angle of the artery.
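The length computation implied above can be sketched as follows, assuming (hypothetically) that the midline is available as a list of (x, y) pixel coordinates:

```python
import math

def midline_length_mm(points, pixel_spacing_mm):
    """Sum the Euclidean distances between successive midline points
    (given in pixel units) and convert to mm.  A diagonal step contributes
    sqrt(2) times the length of a horizontal or vertical step."""
    total = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        total += math.hypot(x1 - x0, y1 - y0)
    return total * pixel_spacing_mm
```

A horizontal run of 10 unit steps yields 10 pixel spacings, while the same 10 steps along a 45º diagonal yield 10·√2 pixel spacings, which is why equal X-axis distances in the graph do not always represent equal vessel lengths.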
The following use cases are examples of how the DICOM Ophthalmology Photography objects may be used. These use cases utilize the term "picture" or "pictures" to avoid using the DICOM terminology of frame, instance, or image. In the use cases, "series" means a DICOM Series.
An N-spot retinal photography exam means that "N" predefined locations on the retina are examined.
A routine N-spot retinal photography exam is ordered for both eyes. There is nothing unusual observed during the exam, so N pictures are taken of each retina. This healthcare facility also specifies that in an N-spot exam a routine external picture is captured of both eyes, that the current intraocular pressure (IOP) is measured, and that the current refractive state is measured.
2N pictures of the retina and one external picture. Each retinal picture is labeled in the acquisition information to indicate its position in the local N-spot definition. The series is not labeled; each picture is labeled OS or OD as appropriate.
In the acquisition information of every picture, the IOP and refractive state information is replicated.
Since there are no stereo pictures taken, there is no Stereometric Relationship IOD instance created.
A routine N-spot retinal photography exam is ordered for both eyes. During the exam a lesion is observed in the right eye. The lesion spans several spots, so an additional wide angle view is taken that captures the complete lesion. Additional narrow angle views of the lesion are captured in stereo. After completing the N-spot exam, several slit lamp pictures are taken to further detail the lesion outline.
2N pictures of the retina and one external picture, one additional wide angle picture of the abnormal retina, 2M additional pictures for the stereo detail of the abnormal retina, and several slit lamp pictures of the abnormal eye. The different lenses and lighting parameters are documented in the acquisition information for each picture.
One instance of a Stereometric Relationship IOD, indicating which of the stereo detail pictures above should be used as stereo pairs.
A routine fluorescein exam is ordered for one eye. The procedure includes:
Routine stereo N-spot pictures of both eyes, routine external picture, and current IOP.
Reference stereo picture of each eye using filtered lighting
Capture of 20 stereo pairs with about 0.7 seconds between pictures in a pair and 3-5 seconds between pairs.
Stereo pair capture of each eye at increasing intervals for the next 10 minutes, taking a total of 8 pairs for each eye.
Four pictures taken with filtered lighting (documented in acquisition information) that constitute a stereo pair for each eye.
40 pictures (20 pairs) for one eye of near term fluorescein. These include the acquisition information, lighting context, and time stamp.
32 pictures (8 pairs for each eye) of long term fluorescein. These include acquisition information, lighting context, and time stamp.
One Stereometric Relationship IOD, indicating which of the above OP instances should be used as stereo pairs.
The pictures of a) through d) may or may not be in the same series.
The patient presents with a generic eye complaint. Visual examination reveals a possible abrasion. The general appearance of the eyes is documented with a wide angle shot, followed by several detailed pictures of the ocular surface. A topical stain is applied to reveal details of the surface lesion, followed by several additional pictures. Due to the nature of the examination, no basic ophthalmic measurements were taken.
The result is a study with one or more series that contains:
The patient is suspected of a nervous system injury. A series of external pictures is taken with the patient given instructions to follow a light with his eyes. For each picture, the location of the light is indicated by the patient intent information (e.g., above, below, patient left, patient right).
The result is a study with one or more series that contains:
The patient is suspected of myasthenia gravis. Both eyes are imaged in the normal situation. Then, after Tensilon® (edrophonium chloride) injection, a series of pictures is taken. The time, amount, and method of Tensilon® (edrophonium chloride) administration are captured in the acquisition information. The time stamps of the pictures are used in conjunction with the behavior of the eyelids to assess the state of the disease.
The result is a study with one or more series that contains:
A stereo optic disk examination is ordered for a patient with glaucoma. For this examination, the IOP does not need to be measured. The procedure includes:
Ophthalmic mapping usually occurs in the posterior region of the fundus, typically in the macula or the optic disc. However, this or other imaging may occur anywhere in the fundus. The mapping data has clinical relevance only in the context of its location in the fundus, so this must be appropriately defined. CID 4207 “Ophthalmic Image Position” codes and the ocular fundus locations they represent are defined by anatomical landmarks and are described using conventional anatomic references, e.g., superior, inferior, temporal, and nasal. Figure U.1.8-1 is a schematic representation of the fundus of the left eye, and provides additional clarification of the anatomic references used in the image location definitions. A schematic of the right eye is omitted since it is identical to the left eye, except horizontally reversed (Temporal→Nasal, Nasal→Temporal).
The spatial precision of the following location definitions varies depending upon the specific reference. Any location that is described as "centered" is assumed to be positioned in the center of the referenced anatomy. However, the center of the macula can be defined visually with more precision than that of the disc or a lesion. The locations without a "center" reference are approximations of the general quadrant in which the image resides.
Following are general definitions used to understand the terminology used in the code definitions.
Central zone - a circular region centered vertically on the macula and extending one disc diameter nasal to the nasal margin of the disc and four disc diameters temporal to the temporal margin of the disc.
Equator - the border between the mid-periphery and periphery of the retina, corresponding to a circle approximately coincident with the ampullae of the vortex veins
Superior - any region that is located superiorly to a horizontal line bisecting the macula
Inferior - any region that is located inferiorly to a horizontal line bisecting the macula
Temporal - any region that is located temporally to a vertical line bisecting the macula
Nasal - any region that is located nasally to a vertical line bisecting the macula
Mid-periphery - A circular zone of the retina extending from the central zone to the equator
Periphery - A zone of the retina extending from the equator to the ora serrata.
Ora Serrata - the most anterior extent and termination of the retina
Figure U.1.8-1 illustrates anatomical representation of defined regions of the fundus of the left eye according to anatomical markers. The right eye has the same representations but reversed horizontally so that temporal and nasal are reversed with the macula remaining temporal to the disc.
Modified after Welch Allyn: http://www.welchallyn.com/wafor/students/Optometry-Students/BIO-Tutorial/BIO-Observation.htm.
The following shows the proposed sequence of events using individual images that are captured for later stereo viewing, with the stereo viewing relationships captured in the stereometric relationship instance.
The instances captured are all time stamped so that the fluorescein progress can be measured accurately. The acquisition and equipment information captures the different setups that are in use:
Acquisition information A is the ordinary illumination and planned lenses for the examination.
Acquisition information B is the filtered illumination, filtered viewing, and lenses appropriate for the fluorescein examination.
Acquisition information C indicates no change to the equipment settings, but once the injection is made, the subsequent images include the drug, method, dose, and time of delivery.
Optical tomography uses the back scattering of light to provide cross-sectional images of ocular structures. Visible (or near-visible) light works well for imaging the eye because many important structures are optically transparent (cornea, aqueous humor, lens, vitreous humor, and retina - see Figure U.3-1).
To provide analogy to ultrasound imaging, the terms A-scan and B-scan are used to describe optical tomography images. In this setting, an A-scan is the image acquired by passing a single beam of light through the structure of interest. An A-scan image represents the optical reflectivity of the imaged tissue along the path of that beam - a one-dimensional view through the structure. A B-scan is then created from a collection of adjacent A-scan images - a two dimensional image. It is also possible to combine multiple B-scans into a 3-dimensional image of the tissue.
When using optical tomography in the eye it is desirable to have information about the anatomic and physiologic state of the eye. Measurements like the patient's refractive error and axial eye length are frequently important for calculating magnification or minification of images. The accommodative state and application of pupil dilating medications are important when imaging the anterior segment of the eye as they each cause shifts in the relative positions of ocular structures. The use of dilating medications is also relevant when imaging posterior segment structures because a small pupil can account for poor image quality.
Ophthalmic tomography may be used to plan placement of a phakic intraocular lens (IOL). A phakic IOL is a synthetic lens placed in the anterior segment of the eye in someone who still has their natural crystalline lens (i.e., they are "phakic"). This procedure is done to correct the patient's refractive error, typically a high degree of myopia (near-sightedness). The exam will typically be performed on both eyes, and each eye may be examined in a relaxed and accommodated state. Refractive information for each eye is required to interpret the tomographic study.
A study consists of one or more B-scans (see Figure U.3-2) and one or more instances of refractive state information. There may be a reference image of the eye associated with each B-scan that shows the position of the scan on the eye.
The anterior chamber angle is defined by the angle between the iris and cornea where they meet the sclera. This anatomic feature is important in people with narrow angles. Since the drainage of aqueous humor occurs in the angle, a significantly narrow angle can impede outflow and result in increased intraocular pressure. Chronically elevated intraocular pressures can result in glaucoma. Ophthalmic tomography represents one way of assessing the anterior chamber angle.
B-scans are obtained of the anterior segment including the cornea and iris. Scans may be taken at multiple angles in each eye (see Figure U.3-2). A reference image may be acquired at the time of each B-scan(s). Accommodative and refractive state information are also important for interpretation of the resulting tomographic information.
Note in the Figure the ability to characterize the narrow angle between the iris and peripheral cornea.
As a transparent structure located at the front of the eye, the cornea is ideally suited to optical tomography. There are multiple disease states including glaucoma and corneal edema where the thickness of the cornea is relevant and tomography can provide this information using one or more B-scans taken at different angles relative to an axis through the center of the cornea.
Tomography is also useful for defining the curvature of the cornea. Accurate measurements of the anterior and posterior curvatures are important in diseases like keratoconus (where the cornea "bulges" abnormally) and in the correction of refractive error via surgery or contact lenses. Measurements of corneal curvature can be derived from multiple B-scans taken at different angles through the center of the cornea.
In both cases, a photograph of the imaged structure may be associated with each B-scan image.
The Retinal Nerve Fiber Layer (RNFL) is made up of the axons of the ganglion cells of the retina. These axons exit the eye as the optic nerve carrying visual signals to the brain. RNFL thinning is a sign of glaucoma and other optic nerve diseases.
An ophthalmic tomography study contains one or more circular scans, perhaps at varying distances from the optic nerve. Each circular scan can be "unfolded" and treated as a B-scan used to assess the thickness of the nerve fiber layer (see Figure U.3-3). A fundus image that shows the scan location on the retina may be associated with each B-scan. To detect a loss of retinal nerve fiber cells the exam might be repeated one or multiple times over some period of time. The change in thickness of the nerve fiber tissue or a trend (serial plot of thickness data) might be used to support the diagnosis.
In the Figure, the pseudo-colored image on the left shows the various layers of the retina in cross section with the nerve fiber layer between the two white lines. The location of the scan is indicated by the bright circle in the photograph on the right.
The macula is located roughly in the center of the retina, temporal to the optic nerve. It is a small and highly sensitive part of the retina responsible for detailed central vision. Many common ophthalmic diseases affect the macula, frequently impacting the thickness of different layers in the macula. A series of scans through the macula can be used to assess those layers (see Figure U.3-4).
A study may contain a series of B-scans. A fundus image showing the scan location(s) on the retina may be associated with one or more B-scans. In the Figure, the corresponding fundus photograph is in the upper left.
Figure U.3-4. Example of a macular scan showing a series of B-scans collected at six different angles
Some color retinal imaging studies are done to determine vascular caliber of retinal vessels, which can vary throughout the cardiac cycle. Images are captured while connected to an ECG machine or a cardiac pulse monitor allowing image acquisition to be synchronized to the cardiac cycle.
Angiography is a procedure that requires a dye to be injected into the patient for the purpose of enhancing the imaging of vascular structures in the eye. A standard step in this procedure is imaging the eye at specified intervals to detect the pooling of small amounts of dye and/or blood in the retina. For a doctor or technician to properly interpret angiography images, it is important to know how much time has elapsed between the dye being injected into the patient (time 0) and the image frame being taken. It is known that such dyes can have an effect on OPT tomographic images as well (and it may be possible to use such dyes to enhance vascular structure in the OPT images); therefore time synchronization will be applied to the creation of the OPT images as well as any associated OP images.
The angiographic acquisition is instantiated as a multi-frame OPT Image. The variable time increments between frames of the image are captured in the Frame Time Vector of the OPT Multi-frame Module. For multiple sets of images, e.g., sets of retinal scan images, the Slice Location Vector will be used in addition to the Frame Time Vector. For 5 sets of 6 scans there will be 30 frames in the Multi-frame Image. The first 6 values in the Frame Time Vector will give the time from injection to the first set of scans, the second 6 will contain the time interval for the second set of 6 scans, and so on, for a total of 5 time intervals.
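The timing layout described above can be sketched as follows. This is a minimal illustration, not a conformant encoder: the helper name and the set start times are hypothetical, and only the "time increment from the previous frame" semantics of the Frame Time Vector (in milliseconds) are modeled.

```python
def frame_time_vector(set_start_times_ms, scans_per_set):
    """Build a Frame Time Vector for grouped angiographic frames.

    Each set's first entry carries the delay since the previous set
    (the very first entry is the time from dye injection, time 0, to
    the first frame); scans within a set are modeled as acquired with
    no measurable delay between them.
    """
    vector = []
    prev = 0.0
    for start in set_start_times_ms:
        vector.append(start - prev)                 # delay from previous set
        vector.extend([0.0] * (scans_per_set - 1))  # rapid scans within a set
        prev = start
    return vector

# Hypothetical acquisition: 5 sets of 6 scans, sets beginning 30 s, 60 s,
# 120 s, 300 s, and 600 s after injection -> 30 frames total.
ftv = frame_time_vector([30000, 60000, 120000, 300000, 600000], 6)
assert len(ftv) == 30
```

A real implementation would store this list in Frame Time Vector (0018,1065) of the OPT Multi-frame Module, alongside the Slice Location Vector for the spatial positions of each scan.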
Another example of an angiographic study with related sets of images is a sequence of SLO/OCT/"ICG filtered" image triples (or SLO/OCT image pairs) that are time-stamped relative to a user-defined event. This user-defined event usually corresponds to the injection time of ICG (indocyanine green) into the patient's blood stream. The resultant images form an angiography study in which the patient's blood flow can be observed in the "ICG filtered" images and correlated with the pathologies observed in the SLO and OCT images, which are spatially related to the ICG image with a pixel-to-pixel correspondence in the X-Y plane.
The prognosis of some pathologies can be aided by a 3D visualization of the affected areas of the eye. For example, in certain cases the density of cystic formations or the amount of drusen present can be hard to ascertain from a series of unrelated two-dimensional longitudinal images of the eye. However, some OCT machines are capable of taking a sequence of spatially related two-dimensional images in a suitably short period of time. These images can either be oriented longitudinally (perpendicular to the retina) or transversely (near-parallel to the retina). Once such a sequence has been captured, it then becomes possible for the examined volume of data to be reconstructed for an interactive 3D inspection by a user of the system (see Figure U.3-5). It is also possible for measurements, including volumes, to be calculated based on the 3D data.
A reference image is often combined with the OCT data to provide a means of registering the 3D OCT data-set with a location on the surface of the retina (see Figure U.3-6 and Figure U.3-7).
While the majority of ophthalmic tomography imaging consists of sets of longitudinal images (also known as B scans or line scans), transverse images (also known as coronal or "en face" images) can also provide useful information in determining the full extent of the volume affected by pathology.
Longitudinal images are oriented in a manner that is perpendicular to the structure being examined, while transverse images are oriented in an "en face" or near parallel fashion through the structure being examined.
Transverse images can be obtained directly as a single scan (as shown in Figure U.3-8 and Figure U.3-9), or they can be reconstructed from 3D data (as shown in Figure U.3-10 and Figure U.3-11). A sequence of transverse images can also be combined to form 3D data.
Figure U.3-9. Correlation between a Transverse OCT Image and a Reference Image Obtained Simultaneously
Figure U.3-8, Figure U.3-9, Figure U.3-10 and Figure U.3-11 are all images of the same pathology in the same eye, but the two different orientations provide complementary information about the size and shape of the pathology being examined. For example, when examining macular holes, determining the amount of surrounding cystic formation is an important aid to subsequent treatment. The extent of such cystic formation is much more easily ascertained using transverse images than longitudinal images. Transverse images are also very useful in locating micro-pathologies such as covered macular holes, which may be overlooked with conventional longitudinal imaging.
In Figure U.3-10, the blue, green, and pink lines show the correspondence of the three images. In Figure U.3-11, the transverse image is highlighted in yellow.
The Hanging Protocol Composite IOD contains information about user viewing preferences, related to image display station (workstation) capabilities. The associated Service Classes support the storage (C-STORE), query (C-FIND) and retrieve (C-MOVE and C-GET) of Hanging Protocol Instances between servers and workstations. The goal is for users to be able to conveniently define their preferred methods of presentation and interaction for different types of viewing circumstances once, and then to automatically layout image sets according to the users' preferences on workstations of similar capability.
The primary expectation is to facilitate the automatic and consistent hanging of images according to definitions provided by the users, sites or vendors of the workstations by providing the capability to:
Search for Hanging Protocols by name, level (single user, user group, site, manufacturer), user identification code, modality, anatomy, and laterality.
Allow automatic hanging of image sets to occur for all studies on workstations with sufficiently compatible capabilities by matching against user or site defined Hanging Protocols. This includes supporting automatic hanging when the user reads from different locations, or on different but similar workstation types.
How relevant image sets (e.g., from the current and prior studies) are obtained is not defined by the Hanging Protocol IOD or Service Classes.
Conformance with the DICOM Grayscale Standard Display Function and the DICOM Softcopy Presentation States in conjunction with the Hanging Protocol IOD allows the complete picture of what the users see, and how they interact with it, to be defined, stored and reproduced as similarly as possible, independent of workstation type. Further, it is anticipated that implementers will make it easy for users to point to a graphical representation of what they want (such as 4x1 versus 12x1 format with a horizontal alternator scroll mechanism) and select it.
User A sits down at workstation X, which has two 1024x1280 resolution screens (Figure V.1-1). The workstation has recently been installed and hence has no user-specific Hanging Protocols defined. The user brings up the list of studies to be read and selects the first study, a chest CT, together with the relevant prior studies. The workstation queries the Hanging Protocol Query SCP for instances of the Hanging Protocol Storage SOP Class. It finds none for this specific user, but matches a site-specific Hanging Protocol Instance, which was set up when the workstation was installed at the site. It applies the site Hanging Protocol Instance, and the user reads the current study in comparison to the prior studies.
The user decides to customize the viewing style, and uses the viewing application to define what type of Hanging Protocol is preferred (layout style, interaction style) by pointing and clicking on graphical representations of the choices. The user chooses a 3-column by 4-row tiled presentation with a "vertical alternator" interaction, and a default scroll amount of one row of images. The user places the current study on the left screen, and the prior study on the right screen. The user requests the application to save this Hanging Protocol, which causes the new Hanging Protocol Instance to be stored to the Hanging Protocol Storage SCP.
When the same user comes back the next day to read chest CT studies at workstation X and a study is selected, the application queries the Hanging Protocol Query SCP to determine which Hanging Protocol Instances best match the scenario of this user on this workstation for this study. The best match returned by the SCP in response to the query is with the user ID matching his user ID, the study type matched to the study type(s) of the image set selected for viewing, and the screen types matching the workstation in use.
A list of matches is produced, with the Hanging Protocol Instance that the user defined yesterday for chest CT matching the best, and the current CT study is automatically displayed on the left screen with that Hanging Protocol. Alternative next best matches are available to the user via the application interface's pull-down menu list of all closely matching Hanging Protocol Instances.
Because this Hanging Protocol defines an additional image set, the prior year's chest CT study for the same patient is displayed next to the current study, on the right screen.
The next week, the same user reads chest CTs at a different site in the same enterprise on a similar type workstation, workstation Y, from a different vendor. The workstation has a single 2048x2560 screen (Figure V.1-1). This workstation queries the Hanging Protocol Query SCP, and retrieves matching Hanging Protocol Instances, choosing as the best match the Hanging Protocol Instance used on workstation X before by user A. This Hanging Protocol is automatically applied to display the chest CT study. The current chest CT study is displayed on the left half of the 2048x2560 screen, and the prior chest CT study is displayed on the right half of the screen, with 3 columns and 8 rows each, maintaining the same vertical alternator layout. The sequence of communications between the workstations and the SCP is depicted in Figure V.1-2.
The overall process flow of Hanging Protocols can be seen in Figure V.2-1, and consists of three main steps: selection, processing, and layout. The selection is defined in Section C.23.1 “Hanging Protocol Definition Module” in PS3.3. The processing and layout are defined in Section C.23.3 “Hanging Protocol Display Module” in PS3.3. The first process step, the selection of the sets of images that need to be available from DICOM image objects, is defined by the Image Sets Sequence of Section C.23.1 “Hanging Protocol Definition Module” in PS3.3. This is an N:M mapping, with multiple image sets potentially drawing from the same image objects.
The second part of the process flow consists of the filtering, reformatting, sorting, and presentation intent operations that map the Image Sets into their final form, the Display Sets. This is defined in Section C.23.3 “Hanging Protocol Display Module” in PS3.3. This is a 1:M relationship, as multiple Display Sets may draw their images from the same Image Set. The filtering operation allows for selecting a subset of the Image Set and is defined by the Hanging Protocol Display Module Filter Operations Sequence. Reformatting allows operations such as multiplanar reformatting to resample images from a volume (Reformatting Operation Type, Reformatting Thickness, Reformatting Interval, Reformatting Operation Initial View Direction, 3D Rendering Type). The Hanging Protocol Display Module Sorting Operations Sequence allows for ordering of the images. Default presentation intent (a subset of the Presentation State operations, such as a default intensity window setting) is defined by the Hanging Protocol Display Module presentation intent Attributes. The Display Sets are containers holding the final sets of images after all operations have occurred. These sets contain the images ready for rendering to locations on the screen(s).
The rendering of a Display Set to the screen is determined by the layout information in the Image Boxes Sequence within a Display Sets Sequence Item in Section C.23.3 “Hanging Protocol Display Module” in PS3.3. A Display Set is mapped to a single Image Boxes Sequence. This is generally a single Image Box (rectangular area on screen), but may be an ordered set of image boxes. The mapping to an ordered set of image boxes is a special case to allow the images to flow in an ordered sequence through multiple locations on the screen (e.g., newspaper columns). Display Environment Spatial Position specifies the rectangular locations on the screen where the images from the Display Sets will be rendered. The type of interaction to be used is defined by the Image Boxes Sequence Item Attributes. A vertically scrolling alternator could be specified by having Image Box Layout Type equal TILED and Image Box Scroll Direction equal VERTICAL.
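The TILED-with-VERTICAL-scroll interaction just described can be sketched as a simple windowing function over the sorted images of a Display Set. This is an illustrative model only, not a conformant renderer; the function name and the sample grid dimensions are hypothetical.

```python
def visible_images(sorted_images, columns, rows, rows_scrolled):
    """Return the images shown by a TILED image box after vertical
    scrolling: the grid is `rows` x `columns`, and each scroll step
    advances the window by one full row of tiles."""
    start = rows_scrolled * columns
    return sorted_images[start:start + rows * columns]

imgs = [f"IMG{i:03d}" for i in range(30)]
# A 3-column by 4-row tiled layout, scrolled down one row:
assert visible_images(imgs, 3, 4, 1) == imgs[3:15]
```

The default scroll amount of one row matches the vertical-alternator example in the scenario above; other scroll amounts would simply change the step applied to `rows_scrolled`.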
An example of this processing is shown in Figure V.2-2. The figure is based on the Neurosurgery Planning Hanging Protocol Example contained in this Annex, and corresponds to the display sets for Display Set Presentation Group #1 (CT only display of current CT study).
Goal: A Hanging Protocol for Chest X-ray, PA & Lateral (LL, RL) views, current & prior, with the following layout:
The Hanging Protocol Definition does not specify a specific modality, but rather a specific anatomy (Chest). The Image Sets Sequence provides more detail, in that it specifies the modalities in addition to the anatomy for each image set.
Hanging Protocol Description: "Current and Prior Chest PA and Lateral"
Hanging Protocol Definition Sequence:
Item 1: (51185008, SCT, "Chest")
Hanging Protocol User Identification Code Sequence: zero length
Goal: A Hanging Protocol for MR & CT of Head, for a neurosurgery plan. 1Kx1K screen on left shows orthogonal MPR slices through the acquisition volume, and in one presentation group has a 3D interactive volume rendering in the lower right quadrant. In all display sets the 1Kx1K screen is split into 4 512x512 quadrants. The 2560x2048 screen has a 4 row by 3 column tiled display area. There are 4 temporal presentation groups: CTnew, MR, combined CTnew and MR, combined CTnew and CTold.
Display Environment Spatial Position Attribute values for image boxes are represented in terms of ratios in pixel space [(0/3072, 512/2560), (512/3072, 0/2560)] rather than the (0.0, 0.0) to (1.0, 1.0) space, for ease of understanding the example.
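The pixel-ratio notation above maps to normalized coordinates by simple division. The sketch below assumes the combined display environment of this hypothetical two-screen example (3072 pixels wide, 2560 pixels high); the helper name is illustrative.

```python
def normalize(x_px, y_px, total_w=3072, total_h=2560):
    """Convert a pixel position in the combined display environment to
    the normalized (0.0-1.0) coordinates actually stored in Display
    Environment Spatial Position."""
    return (x_px / total_w, y_px / total_h)

# 512/2560 of the height is exactly 0.2 in normalized coordinates:
assert normalize(0, 512) == (0.0, 0.2)
```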
Hanging Protocol Description: "Neurosurgery planning, requiring MR and CT of head"
Hanging Protocol Definition Sequence:
Item 1: (69536005, SCT, "Head")
Hanging Protocol User Identification Code Sequence: zero length
Synchronized Scrolling Sequence: [Link up (synchronize) the MR and CT tiled scroll panes in Display Sets 15 and 16, and the CT new and CT old tiled scroll panes in Display Sets 21 and 22]
The following is an example of a general C-FIND Request for the Hanging Protocol Information Model - FIND SOP Class that is searching for all Chest related Hanging Protocols for the purpose of reading projection Chest X-ray. The user is at a workstation that has two 2Kx2.5K screens.
The following is an example of a set of C-FIND Responses for the Hanging Protocol Information Model - FIND SOP Class, answering the C-FIND Request listed above. There are a few matches for this general query. The application needs to select the best choice among the matches, which is the second response. The first response is for Chest CT, and the third response does not match the user's workstation environment as well as the second does.
For Display Set Patient Orientation (0072,0700) with value "A\F", the application interpreting the Hanging Protocol will arrange sagittal images oriented with the patient's anterior toward the right side of the image box, and the patient's foot will be toward the bottom of the image box. An incoming sagittal MRI image as shown in Figure V.6-1 will require a horizontal flip before display in the image box.
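The flip decision described above can be sketched as follows. This is a deliberately simplified model that compares single patient-direction designators for the row and column directions; real images require evaluation of the full direction cosines in Image Orientation (Patient). The function and tuple convention are illustrative only.

```python
# Opposing patient-direction designators (Anterior/Posterior,
# Head/Foot, Left/Right).
OPPOSITE = {"A": "P", "P": "A", "H": "F", "F": "H", "L": "R", "R": "L"}

def flips_needed(current, desired):
    """current/desired: (row, column) patient-direction codes, e.g.
    ("P", "F") for rows running posterior and columns running toward
    the feet. Returns (horizontal_flip, vertical_flip)."""
    horizontal = current[0] == OPPOSITE[desired[0]]
    vertical = current[1] == OPPOSITE[desired[1]]
    return horizontal, vertical

# A sagittal MR acquired with rows running posterior and columns running
# toward the feet, displayed with Display Set Patient Orientation "A\\F",
# needs a horizontal flip only:
assert flips_needed(("P", "F"), ("A", "F")) == (True, False)
```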
The scenarios in which Digital Signatures would be used in DICOM Structured Reports include, but are not limited to the following.
Case 1: Human Signed Report and Automatically Signed Evidence.
The archive, after receiving an MPPS complete and determining that it has the complete set of objects created during an acquisition procedure step, creates a signed Key Object Selection Document Instance with secure references to all of the DICOM composite objects that constitute the exam. The Document would include a Digital Signature according to the Basic SR Digital Signatures Secure Use Profile with the Digital Signature Purpose Code Sequence (0400,0401) of (14,ASTM-sigpurpose, "Source Signature"). It would set the Key Object Selection Document Title of that Instance to (113035, DCM, "Signed Complete Acquisition Content"). Note that the objects that are referenced in the MPPS may or may not have Digital Signatures. By creating the Key Object Selection Document Instance, the archive can in effect add the equivalent of Digital Signatures to the set of objects.
A post-processing system generates additional evidence objects, such as measurements or CAD reports, referring to objects in the exam. This post-processing system may or may not include Digital Signatures in the evidence objects, and may or may not be included as secure references in a signed Key Object Selection Document.
Working at a reporting station, a report author gathers evidences from a variety of sources, including those referenced by the Key Object Selection Document Instance and the additional evidence objects generated by the post-processing system, and incorporates his or her own observations and conclusions into one or more reports.
It is desired that all evidence references from a DICOM SR be secure. The application creating the SR may either:
create secure references by copying a verified Digital Signature from the referenced object or by generating a MAC code directly from the referenced object,
make a secure reference to a signed Key Object Selection Document that in turn securely references the SOP Instances, or
copy the secure reference information from a trusted Key Object Selection Document to avoid the overhead of recalculating the MAC codes or revalidating the reference Digital Signatures.
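The tamper-detection property underlying the first option can be illustrated with a generic MAC over the serialized object. This sketch is not the DICOM mechanism itself: the actual Referenced SOP Instance MAC is computed over selected data elements as specified in PS3.15, and the key handling shown here is purely hypothetical.

```python
import hashlib
import hmac

def reference_mac(serialized_object: bytes, key: bytes) -> str:
    """Illustrative MAC over an object's bytes; a later recomputation
    that matches the stored value shows the object is unaltered."""
    return hmac.new(key, serialized_object, hashlib.sha256).hexdigest()

# At report-creation time the MAC is stored with the reference; at
# reading time it is recomputed from the retrieved object and compared.
stored = reference_mac(b"...DICOM object bytes...", b"shared-secret")
recomputed = reference_mac(b"...DICOM object bytes...", b"shared-secret")
assert recomputed == stored  # unchanged object verifies
```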
When the author completes a DICOM SR, the system, using the author's X.509 Digital Signature Certificate, generates a Digital Signature with the Digital Signature Purpose Code Sequence (0400,0401) of (1, ASTM-sigpurpose, "Author Signature") for the report.
The author's supervisor reviews the DICOM SR. If the supervisor approves of the report, the system sets the Verification Flag to "VERIFIED" and adds a Digital Signature with the Digital Signature Purpose Code Sequence (0400,0401) of (5, ASTM-sigpurpose, "Verification Signature") or (6, ASTM-sigpurpose, "Validation Signature") using the supervisor's X.509 certificate.
At some later time, someone who is reading the DICOM SR SOP Instance wishes to verify its authenticity. The system would verify that the Author Signature, as well as any Verification or Validation Signatures present, are intact (i.e., that the signed data has not been altered, based on the recorded Digital Signatures, and that the X.509 Certificates were valid at the time the report was created).
If the report reader wishes to inspect DICOM source materials referenced in a DICOM SR, the system can ensure that the materials have not been altered since the report was written by verifying the Referenced Digital Signatures or the Referenced SOP Instance MAC that the report creator generated from the referenced materials.
Case 2: Cross Enterprise Document Exchange
An application sends by any means a set of DICOM composite objects to an entity outside of the institutional environment (e.g., for review by a third party).
The application creates a signed Key Object Selection Document Instance with a Key Object Selection Document Title of (113031, DCM, "Signed Manifest") referencing the set of DICOM Data Objects that it sent outside the institutional environment, and sends that SR to the external entity as a shipping manifest.
The external entity may utilize the Key Object Selection SR SOP Instance to confirm that it received all of the referenced objects intact (i.e., without alterations). Because the signed Key Object Selection Instance must use secure references, it can verify that the objects have not been modified.
This Annex describes a use of Key Object Selection (KO) and Grayscale Softcopy Presentation State (GSPS) SOP Instances, in conjunction with a typical dictation/transcription process for creating an imaging clinical report. The result is a clinical report as a Basic Text Structured Report (SR) SOP Instance that includes annotated image references (see Section X.2). This report may also (or alternatively) be encoded as an HL7 Clinical Document Architecture (CDA) document (see Section X.3).
Similar but more complex processes that include, for instance, numeric measurements and Enhanced or Comprehensive SR, are not addressed by this Annex. This Annex also does not specifically address the special issues associated with reporting across multiple studies (e.g., the "grouped procedures" case).
During the softcopy reading of an imaging study, the physician dictates the report, which is sent to a transcription service or is processed by a voice recognition system. The transcribed dictation arrives at the report management system (typically a RIS) by some mechanism not specified here. The report management system enables the reporting physician to correct, verify, and "sign" the transcribed report. See Figure X.1-1. This data flow applies to reports stored in a proprietary format, reports stored as DICOM Basic Text SR SOP Instances, or reports stored as HL7 CDA instances.
The report management system has flexibility in encoding the report title. For example, it could be any of the following:
There are LOINC codes associated with each of these types of titles, if a coded title is used on the report (see CID 7000 “Diagnostic Imaging Report Document Title”).
The transcribed dictation may be either a single text stream, or a series of text sections each with a title. Division of reports into a limited number of canonically named sections may be done by the transcriptionist, or automated division of typical free text reports may be possible with voice recognition or a natural language processing algorithm.
For an electronically stored report, the signing function may or may not involve a cryptographic digital signature; any such cryptographic signature is beyond the scope of this description.
To augment the basic dictation/transcription reporting use case, it is desired to select significant images to be attached (by reference) to the report. During the softcopy reading, the physician may select images from those displayed on his workstation (e.g., by a point-and-click function through the user interface). The selection of images is conveyed to the image repository (PACS) through a DICOM Key Object Selection (KO) document. When the report management system receives the transcribed dictation, it queries the image repository for any KO documents, and appends the image references from the KO to the transcription. In this process step, the report management system does not need to access the referenced images; it only needs to copy the references into the draft report. The correction and signature function potentially allows the physician to retrieve and view the referenced images, correct and change text, and to delete individual image references. See Figure X.1-2.
The transcribed dictation must have associated with it sufficient key Attributes for the report management system to query for the appropriate KO documents in the image repository (e.g., Study ID, or Accession Number).
Each KO document in this process includes a specific title "For Report Attachment", a single optional descriptive text field, plus a list of image references using the SR Image Content Item format. The report management system may need to retrieve all KO documents of the study to find those with this title, since the image repository might not support the object title as a query return key.
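The report manager's filtering step described above can be sketched as follows. The KO documents are modeled as plain dictionaries, and matching on the title's code meaning is purely illustrative; a real implementation would match the Concept Name Code Sequence of the KO document title against the appropriate code from the DICOM controlled terminology.

```python
def report_attachment_kos(ko_documents):
    """Keep only the study's KO documents whose title indicates they
    are to be attached to the report (illustrative matching on the
    code meaning string)."""
    return [
        ko for ko in ko_documents
        if ko["title_meaning"] == "For Report Attachment"
    ]

# Hypothetical query results for one study: one KO for the report,
# one teaching-file KO that must not be attached.
kos = [
    {"title_meaning": "For Report Attachment", "description": "Lesion"},
    {"title_meaning": "For Teaching", "description": "Normal anatomy"},
]
assert len(report_attachment_kos(kos)) == 1
```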
Multiple KO instances may be created for a study report, e.g., to facilitate associating different descriptive text (included in the KO document) with different images or image sets. All KOs with the title "For Report Attachment" in the study are to be attached to the dictated report by copying their content into the draft report (see Section X.2 and Section X.3). (There may also be KOs with other titles, such as "For Teaching", that are not to be attached to the report.)
The nature of the image reference links will differ depending on the format of the report. A DICOM SR format report will use DICOM native references, and other formats may use a hyperlink to the referenced images using the Web Access to DICOM Persistent Objects (WADO) service (see PS3.18).
The KO also allows the referencing of a Grayscale Softcopy Presentation State (GSPS) instance for each selected image. A GSPS instance can be created by the workstation for annotation ("electronic grease pencil") of the selected image, as well as to set the window width/window level, rotation/flip, and/or display area selection of the image attached to the report. The created GSPS instances are transferred to the image repository (PACS) and are referenced in the KO document.
As with image references, the report management system may include the GSPS instance references in the report. When the report is subsequently displayed, the reader may retrieve the referenced images together with the referenced GSPS, so that the image is displayed with the annotations and other GSPS display controls. See Figure X.1-3.
Note that the GSPS display controls can also be included in WADO hyperlinks and invoked from non-DICOM display stations.
This section describes the use of transcribed dictation and Key Object Selection (KO) instances to produce a DICOM Basic Text SR instance. A specific SR Template, TID 2005 “Transcribed Diagnostic Imaging Report”, is defined to support transcribed diagnostic imaging reports created using this data flow.
The Attributes of the Patient and Study Modules will be identical to those of the Study being reported. The following information is encoded in the SR Document General Module:
Identity of the dictating physician (observer context) in the Author Sequence
Identity of the transcriptionist or transcribing device (voice recognition) in the Participant Sequence
Identity of the report signing physician in the Verifying Observer Sequence
Identity of the institution owning the report in the Custodial Organization Sequence
Linkages to the order and requested procedures in the Referenced Request Sequence
A list of all images in the study in the Current Requested Procedure Evidence Sequence (from MPPS SOP Instances of the Study, or from query of the image repository)
A list of all images not in the study, but also attached to the report as referenced significant images, in the Pertinent Other Evidence Sequence
The transcribed dictation is used to populate one or more section containers in the Content Tree of the SR Instance. If the transcription consists of a single undifferentiated text stream, it will typically be encoded using a single CONTAINER Content Item with Concept Name "Findings", and the text encoded as the value in a subsidiary TEXT Content Item with Concept Name "Finding". When the transcription is differentiated into multiple sections with captions, e.g., using the concepts in CID 7001 “Diagnostic Imaging Report Heading”, each section may be encoded in a separate CONTAINER, with the concept from CID 7001 “Diagnostic Imaging Report Heading” as the container Concept Name, and the corresponding term from CID 7002 “Diagnostic Imaging Report Element” as the Concept Name for a subsidiary TEXT Content Item. See Figure X.2-1.
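The single-text-stream case above can be sketched as a nested structure. Plain dictionaries stand in for DICOM SR content items here, so this is a shape illustration rather than toolkit code; the (121070, DCM, "Findings") and (121071, DCM, "Finding") codes should be verified against the DICOM controlled terminology and CID 7001/7002 before being relied upon.

```python
def text_item(concept, text):
    """One TEXT content item holding a run of transcribed text."""
    return {
        "ValueType": "TEXT",
        "RelationshipType": "CONTAINS",
        "ConceptNameCodeSequence": concept,
        "TextValue": text,
    }

# A "Findings" CONTAINER holding the undifferentiated transcription as
# a single subsidiary "Finding" TEXT item.
findings = {
    "ValueType": "CONTAINER",
    "RelationshipType": "CONTAINS",
    "ContinuityOfContent": "SEPARATE",
    "ConceptNameCodeSequence": ("121070", "DCM", "Findings"),
    "ContentSequence": [
        text_item(("121071", "DCM", "Finding"),
                  "No acute cardiopulmonary abnormality."),
    ],
}
assert findings["ContentSequence"][0]["ValueType"] == "TEXT"
```

For a sectioned transcription, one such CONTAINER would be created per section, with the section heading as the container concept name.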
The Content Items from each associated KO object will be included in the SR in a separate CONTAINER with Concept Name (121180, DCM, "Key Images"). The text item "Key Object Description" and all image reference items shall be copied from the KO Content Tree to the corresponding SR container. See Figure X.2-2.
The KO and SR IMAGE Content Item format allows the encoding of an icon (image thumbnail) with the image reference, as well as a reference to a GSPS instance controlling image presentation. Whether or not to include icons or GSPS references is an implementation decision of the softcopy review station that creates the KO; the IMAGE Content Item as a whole may be simply copied by the report management system from the KO to the Basic Text SR instance.
The intended process is that all KOs "For Report Attachment" are to be automatically included in the draft report. Therefore, the correction and signature function of the report management system should allow the physician to delete image references that were included, perhaps unintentionally, by the automatic process.
This section describes the use of transcribed dictation and Key Object Selection (KO) documents to produce an HL7 Clinical Document Architecture (CDA) Release 2 document.
While this section describes encoding as CDA Release 2, notes are provided about encoding issues for CDA Release 1.
The header of the CDA instance includes:
Identity of the requested procedure ("documentationOf" act relationship)
Identity of the dictating physician ("author" participation)
Identity of the transcriptionist ("dataEnterer" participation)
Identity of the report signing physician ("legalAuthenticator" participation)
Identity of the institution owning the report ("custodian" participation)
Identity of the request/order ("inFulfillmentOf" act relationship)
Each transcription section can be encoded in a Section in the CDA document. The Section.Code and/or Section.Title can be derived from the corresponding transcription section title, if any. Although the transcription text can be encoded in the Section.Text without further markup, it is recommended that it be enclosed in <paragraph> tags.
Images are referenced using hypertext links in the narrative text. These links in CDA are not considered part of the attested content.
The primary use case for this Annex is the dictation/transcription reporting model. In the historical context of that model, the images (film sheets) are usually not considered part of the attested content of the report, although they are part of the complete exam record. I.e., the report is clinically complete without the images, and the referenced images are not formally part of the report. Therefore, this Annex discusses only the use of image references, not images embedded in the report.
Being part of the attested content would require the images to be displayed every time the report is displayed - i.e., they are integral to understanding the report. If the images are attested, they must also be encapsulated with the CDA package. I.e., the CDA document itself is only one part of the interchanged package; the referenced images must also always be sent with the CDA document. If the images are for reference only and not attested, the Image Content Item may be transformed to a simple hypertext link; it is then the responsibility of CDA document receiver to follow or not follow the hyperlink. Moreover, as the industry moves toward ubiquitous network access to a distributed electronic healthcare record, there will be less need to prepackage the referenced images with the report.
In the current use case, there will be one or more KO instances with image references. Each KO instance can be transformed to a Section in the CDA document with a Section.Title "Key Images", and a Section.Code of 121180 from the DICOM Controlled Terminology (see PS3.16). If the KO includes a TEXT Content Item, it can be transformed to <paragraph> data in the Section.Text of that Section. Each IMAGE Content Item can be transformed to a link item using the <linkHtml> markup.
Within the <linkHtml> markup, the value of the href Attribute is the DICOM object reference as a Web Access to Persistent DICOM Objects (WADO) specified URI (see Table X.3-1).
When a DICOM object reference is included in an HL7 CDA document, it is presumed the recipient would not be a DICOM application; it would have access only to general Internet network protocols (and not the DICOM upper layer protocol), and would not be configured with the means to display a native DICOM image. Therefore, the recommended encoding of a DICOM Object Reference in the CDA narrative block <linkHtml> uses WADO for access by the HTTP/HTTPS network protocol (see PS3.18), using one of the formats broadly supported in Web browsers (image/jpeg or video/mpeg) as the requested content type.
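The construction of such a WADO reference can be sketched as follows. The endpoint and UIDs below are hypothetical placeholders; a real reference would use the values from the referenced SOP Instance as described in Table X.3-1.

```python
from urllib.parse import urlencode

def wado_uri(base, study_uid, series_uid, object_uid, content_type=None):
    """Build a WADO-URI reference suitable for a CDA <linkHtml> href.

    `base` is the <scheme>://<authority>/<path> of the WADO service.
    contentType may be omitted for single frame images, whose default
    is image/jpeg; multi-frame video should request video/mpeg
    explicitly, since its default is application/dicom.
    """
    params = [
        ("requestType", "WADO"),
        ("studyUID", study_uid),
        ("seriesUID", series_uid),
        ("objectUID", object_uid),
    ]
    if content_type is not None:
        params.append(("contentType", content_type))
    return base + "?" + urlencode(params)

# Hypothetical server and UIDs, for illustration only.
href = wado_uri("https://pacs.example.org/wado", "1.2.3.1", "1.2.3.2", "1.2.3.3")
```

For a multi-frame video object, the same helper would be called with `content_type="video/mpeg"` to override the default.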
In CDA Release 1, the markup tag for hyperlinks is <link_html> within the scope of a <link> tag.
Table X.3-1. WADO Reference in an HL7 CDA <linkHtml>
Literal strings are in normal typeface, while <italic typeface within angle brackets> indicates values to be copied from the identified source.
The default contentType for single frame images is image/jpeg, which does not need to be specified as a WADO component. However, the default contentType for multiple frame images is application/dicom, which needs to be overridden with the specific request for video/mpeg.
There is not yet a standard mechanism for minimizing the potential for staleness of the <scheme>://<authority>/<path> component.
If the IMAGE Content Item includes an Icon Image Sequence, the report creation process may embed the icon in the Section.Text narrative. The Icon Image Sequence Pixel Data is converted into an image file, e.g., in JPEG or GIF format, and base64 encoded. The file is encoded in an observationMedia entry in the CDA instance, and a <renderMultiMedia> tag reference to the entry is encoded in the Section.Text adjacent to the <linkHtml> of the image reference.
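The base64 encoding step for such an icon can be sketched as follows; the icon bytes below are placeholders, not real JPEG data, and in practice they would be the image file produced from the Icon Image Sequence Pixel Data.

```python
import base64

def icon_to_base64(icon_bytes: bytes) -> str:
    """Base64-encode an icon image file (e.g., JPEG or GIF bytes
    derived from the Icon Image Sequence Pixel Data) for carriage
    as text inside a CDA observationMedia entry."""
    return base64.b64encode(icon_bytes).decode("ascii")

# Placeholder bytes standing in for a converted icon file.
encoded = icon_to_base64(b"\xff\xd8\xff\xe0fake-jpeg-bytes")
```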
The Current Requested Procedure Evidence Sequence (0040,A375) of the KO instance lists all the SOP Instances referenced in the IMAGE Content Items in their hierarchical Study/Series/Instance context. It is recommended that this list be transcoded to CDA Entries in a Section with Section.Title "DICOM Object Catalog" and a Section.Code of 121181 from the DICOM Controlled Terminology (see PS3.16).
Since the image hypertext links in the Section narrative may refer to both an image and a softcopy presentation state, as well as possibly being constrained to specific frame numbers, in general there is not a simple mapping from the <linkHtml> to an entry. Therefore it is not expected that there would be ID reference links between the <linkHtml> and related entries.
The purpose of the Structured Entries is to allow DICOM-aware applications to access the referenced images in their hierarchical context.
The encoding of the DICOM Object References in CDA Entries is shown in Figure X.3-1 and Tables X.3-2 through X.3-6. All of the mandatory data elements for the Entries are available in the Current Requested Procedure Evidence Sequence; optional elements (e.g., instance DateTimes) may also be included if known by the encoding application.
The format of Figure X.3-1 follows the conventions of HL7 v3 Reference Information Model diagrams.
Table X.3-2. DICOM Study Reference in an HL7 V3 Act (CDA Act Entry)
Table X.3-3. DICOM Series Reference in an HL7 V3 Act (CDA Act Entry)
<Series Instance UID (0020,000E) as root property with no extension property>
1.2.840.10008.2.16.4 as codeSystem property, DCM as codeSystemName property, "DICOM Series" as displayName property, Modality as qualifier property (see text and Table X.3-4)>
The code for the Act representing a Series uses a qualifier property to indicate the modality. The qualifier property is a list of coded name/value pairs. For this use, only a single list entry is used, as described in Table X.3-4.
Table X.3-4. Modality Qualifier for the Series Act.Code
1.2.840.10008.2.16.4 as codeSystem property,
<Modality (0008,0060) as code property, 1.2.840.10008.2.16.4 as codeSystem property, DCM as codeSystemName property, Modality code meaning (from PS3.16) as displayName property>
Table X.3-5. DICOM Composite Object Reference in an HL7 V3 Act (CDA Observation Entry)
<SOP Instance UID (0008,0018) as root property with no extension property>
<SOP Class UID (0008,0016) as code property, 1.2.840.10008.2.6.1 as codeSystem property, DCMUID as codeSystemName property, SOP Class UID Name (from PS3.6) as displayName property>
<application/DICOM as mediaType property, WADO reference (see Table X.3-6) as reference property>
Table X.3-6. WADO Reference in an HL7 DGIMG Observation.Text
An application that receives a CDA with image references, and is capable of using the full services of DICOM upper layer protocol directly, can use the WADO parameters in either the linkHtml or in the DGIMG Observation.Text to retrieve the object using the DICOM network services. Such an application would need to be pre-configured with the hostname/IP address, TCP port, and AE Title of the DICOM object server (C-MOVE or C-GET SCP); this network address is not part of the WADO string. (Note that pre-configuration of this network address is typical for DICOM applications, and is facilitated by the LDAP-based DICOM Application Configuration Management Profile; see PS3.15.)
The application would open a Query/Retrieve Service Association with the configured server, and send a C-MOVE or C-GET command using the study, series, and object instance UIDs identified in the WADO query parameters. Such an application might also reasonably query the server for related objects, such as Grayscale Softcopy Presentation State.
When using the C-GET service, the retrieving application needs to specify and negotiate the SOP Class of the retrieved objects when it opens the Association. This information is not available in the linkHtml WADO reference; however, it is available in the DGIMG Observation.Code. It may also be obtained from the configured server using a C-FIND query on a prior Association.
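Extracting the hierarchical UIDs from a <linkHtml> WADO reference for use in a retrieve identifier can be sketched as follows. The URL is a hypothetical example; the parameter names are those of Table X.3-1, and the server's host, port, and AE Title would still have to come from local configuration.

```python
from urllib.parse import urlparse, parse_qs

def uids_from_wado(href: str) -> dict:
    """Extract the Study/Series/SOP Instance UIDs from a WADO-URI
    reference, e.g., to build a C-MOVE or C-GET identifier. The DICOM
    object server's network address is not part of the WADO string."""
    query = parse_qs(urlparse(href).query)
    return {
        "StudyInstanceUID": query["studyUID"][0],
        "SeriesInstanceUID": query["seriesUID"][0],
        "SOPInstanceUID": query["objectUID"][0],
    }

# Hypothetical reference, for illustration only.
uids = uids_from_wado(
    "https://pacs.example.org/wado?requestType=WADO"
    "&studyUID=1.2.3.1&seriesUID=1.2.3.2&objectUID=1.2.3.3")
```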
The report may be created as both an SR instance and a CDA instance. In this case, the two instances are equivalent, and can cross-reference each other.
The CDA Document shall contain clinical content equivalent to the SR Document.
The HL7 CDA standard specifically addresses transformation of documents from a non-CDA format. The requirement in the CDA specification is: "A proper transformation must ensure that the human readable clinical content of the report is not impacted."
There is no requirement that the transform or transcoding between DICOM SR and HL7 CDA be reversible. In particular, some Attributes of the DICOM Patient, Study, and Series IEs have no corresponding standard encoding in the HL7 CDA Header, and vice versa. Such data elements, if transcoded, may need to be encoded in "local markup" (in HL7 CDA) or private data elements (in DICOM SR) in an implementation-dependent manner; and some such data elements may not be transcoded at all. It is a responsibility of the transforming application to ensure clinical equivalence.
Many Attributes of the SR Document General Module can be transcoded to CDA Header participations or related acts.
Due to the inherent differences between DICOM SR and HL7 CDA, a transcoded document will have a different UID than the source document. However, the SR Document may reference the CDA Document as equivalent using the Equivalent CDA Document Sequence (0040,A090) Attribute, and the CDA Document may reference the SR Document with a relatedDocument act relationship.
Since the ParentDocument target of the relatedDocument relationship is constrained to be a simple DOCCLIN act, it is recommended that the reference to the DICOM SR be encoded per Table X.3-5, without explicit identification of the Study and Series Instance UIDs, and with classCode DOCCLIN (rather than DGIMG).
Digital projection X-ray images typically have a very high dynamic range due to the performance of the digital detector. In order to display these images, various Value of Interest (VOI) transformations can be applied to facilitate diagnostic interpretation. The original description of the DICOM grayscale pipeline assumed that either the parameters of a linear LUT (window center and width) are used, or a static non-linear LUT is applied (VOI LUT).
Normally, a display application interprets the window center and width as parameters of a function following a linear law (see Figure Y-1).
A VOI LUT Sequence can be provided to describe a non-linear LUT as a table of values. The limitation of this approach is that the parameters of the LUT cannot be adjusted subsequently, unless the application provides the ability to scale the output of the LUT (and there is no way in DICOM to save such a change short of building a new scaled LUT), or to fit a curve to the LUT data, which may then be difficult to parametrize or adjust, or be a poor fit.
Digital X-ray applications all have their counterpart in conventional film/screen X-ray, and a critical requirement for such applications is an image "look" close to that of the film/screen application. In the film/screen world, the image dynamics are mainly driven by the H-D curve of the film, that is, the plot of the resulting optical density (OD) of the film against the logarithm of the exposure. The typical appearance of an H-D curve is illustrated in Figure Y-2.
In digital applications, a straightforward way to mock up a film-like look would be to use a VOI LUT that has a similar shape to an H-D curve, namely a toe, a linear part and a shoulder instead of a linear ramp.
While such a curve could be encoded as data within a VOI LUT, DICOM defines an alternative for interpreting the existing window center and width parameters, as the parameters of a non-linear function.
Figure Y-3 illustrates the shape of a typical sigmoid as well as the graphical interpretation of the two LUT parameters, window center and window width. This figure corresponds to the equation defined in PS3.3 for the case where the value of VOI LUT Function (0028,1056) is SIGMOID.
If a receiving display application does not support the SIGMOID VOI LUT Function, then it can successfully apply the same window center and window width parameters to a linear ramp and achieve acceptable results, specifically a similar perceived contrast but without the roll-off at the shoulder and toe.
A receiving display application that does support such a function is then able to allow the user to adjust the window center and window width with a more acceptable resulting appearance.
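A minimal sketch of such a sigmoid function, following the general form given in PS3.3 (output rising smoothly between a minimum and maximum output value, centered at the window center, with slope controlled by the window width). The 8-bit output range below is an assumed example, not part of the definition.

```python
import math

def sigmoid_voi(x, window_center, window_width, y_min=0.0, y_max=255.0):
    """Sigmoid VOI LUT function: maps input value x smoothly from
    y_min to y_max, reaching the midpoint at window_center; the
    window width controls how steep the transition is."""
    return y_min + (y_max - y_min) / (
        1.0 + math.exp(-4.0 * (x - window_center) / window_width))

# At the window center, the output is mid-range.
mid = sigmoid_voi(2048, window_center=2048, window_width=4096)
```

Applying the same window center and width to a linear ramp instead yields a similar perceived contrast in the central region, but without the gentle roll-off at the toe and shoulder.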
The Isocenter Reference System Attributes describe the 3D geometry of the X-Ray equipment composed of the X-Ray positioner and the X-Ray table.
These Attributes define three coordinate systems in the 3D space:
The Isocenter Reference System Attributes describe the relationship between the 3D coordinates of a point in the table coordinate system and the 3D coordinates of that point in the positioner coordinate system (both systems moving in the equipment), by using the Isocenter coordinate system, which is fixed in the equipment.
Any point of the Positioner coordinate system (PXp, PYp, PZp) can be expressed in the Isocenter coordinate system (PX, PY, PZ) by applying the following transformation:
And inversely, any point of the Isocenter coordinate system (PX, PY, PZ) can be expressed in the Positioner coordinate system (PXp, PYp, PZp) by applying the following transformation:
Where R1, R2 and R3 are defined as follows:
Any point of the table coordinate system (PXt, PYt, PZt) (see Figure Z-1) can be expressed in the Isocenter Reference coordinate system (PX, PY, PZ) by applying the following transformation:
And inversely, any point of the Isocenter coordinate system (PX, PY, PZ) can be expressed in the table coordinate system (PXt, PYt, PZt) by applying the following transformation:
Where R1, R2 and R3 are defined as follows:
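The normative definitions of R1, R2 and R3 are given in the Standard as rotation matrices. Purely as an illustration of how such elementary rotations compose and apply to a point, the following sketch uses placeholder axes and angles, not the normative definitions.

```python
import math

def rot_x(a):
    """Elementary rotation about the X axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    """Elementary rotation about the Y axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    """Elementary rotation about the Z axis by angle a (radians)."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    """3x3 matrix product A·B."""
    return [[sum(A[i][k] * B[k][j] for k in range(3))
             for j in range(3)] for i in range(3)]

def apply(M, p):
    """Apply 3x3 matrix M to point p."""
    return [sum(M[i][k] * p[k] for k in range(3)) for i in range(3)]

# Composing three elementary rotations (placeholder angles) and
# transforming a point into the composed frame.
R = matmul(rot_z(0.1), matmul(rot_y(0.2), rot_x(0.3)))
p_iso = apply(R, [1.0, 0.0, 0.0])
```

Since each factor is orthonormal, the composed matrix preserves distances, and the inverse transformation (e.g., Isocenter back to positioner coordinates) is simply its transpose.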
This Annex describes the use of the X-Ray Radiation Dose SR Object. Multiple systems contributing to patient care during a visit may expose the patient to irradiation during diagnostic and/or interventional procedures. Each of these systems may record the dose in an X-Ray Radiation Dose SR information object. Radiation safety information reporting systems may take advantage of this information and create dose reports for a visit, for parts of a procedure performed, or as an accumulation for the patient in total, provided the information is completely available as structured content.
An irradiation event is the loading of X-Ray equipment caused by a single continuous actuation of the equipment's irradiation switch, from the start of the loading time of the first pulse until the loading time trailing edge of the final pulse. The irradiation event is the smallest information entity to be recorded in the realm of Radiation Dose reporting. Individual irradiation events are described by a set of accompanying physical parameters that is sufficient to understand the "quality" of the irradiation being applied. This set of parameters may differ for the various types of equipment able to create irradiation events. Any on-off switching of the irradiation source during the event is not treated as a set of separate events; rather, the event includes the time between start and stop of irradiation as triggered by the user. For example, a pulsed fluoroscopic X-Ray acquisition is treated as a single irradiation event.
Irradiation events include all exposures performed on X-Ray equipment, independent of whether a DICOM Image Object is being created. That is why an irradiation event needs to be described with sufficient Attributes to exchange the physical nature of irradiation applied.
Accumulated Dose Values describe the integrated results of performing multiple irradiation events. The scope of accumulation is typically a study or a performed procedure step. Multiple Radiation Dose objects may be created for one Study or one Radiation Dose object may be created for multiple performed procedures.
The following use cases illustrate the information flow between participating roles and the possible capabilities of the equipment that is performing in those roles. Each case will include a use case diagram and denote the integration requirements. The diagrams will denote actors (persons in role or other systems involved in the process of data handling and/or storage). Furthermore, in certain cases it is assumed that the equipment (e.g., Acquisition Modality) is capable of displaying the contents of any dose reports it creates.
These use cases are only examples of possible uses for the Dose Report, and are by no means exhaustive.
This is the basic use case for electronic dose reporting. See Figure AA.3-1.
In this use case the user sets up the Acquisition Modality, and performs the study. The Modality captures the irradiation event exposure information, and encodes it together with the accumulated values in a Dose Report. The Modality may allow the user to review the dose report, and to add comments. The acquired images and Dose Report are sent to a Long-Term Storage system (e.g., PACS) that is capable of storing Dose Report objects.
A Display Station may retrieve the Dose Report from the Storage system, and display it. Because the X-Ray Radiation Dose SR object is a proper subset of the Enhanced SR object, the Display Station may render it using the same functionality as used for displaying any Enhanced SR object.
Dose Reports may also be created by manual data entry for image acquisitions performed on non-digital Acquisition Modalities. See Figure AA.3-2.
In this use case the user may manually enter the irradiation event exposure information into a Dose Reporting Station, possibly transcribing it from a dosimeter read-out display. The station encodes the data in a Dose Report and sends it to a Storage system. The same Dose Reporting Station may be used to support several acquisition modalities.
This case may be useful in radiography environments with legacy systems not being able to provide DICOM functions, where the DICOM X-Ray Radiation Dose SR Object provides a standard format for recording and storing irradiation events.
Note that in a non-PACS environment, the Dose Reports may be sent to a Long-Term Storage function built into a Radiation Safety workstation or information system.
A specialized Radiation Safety workstation may contribute to the process of dose reporting with more elaborate calculations or graphical dose data displays, or by aggregating dose data over multiple studies. See Figure AA.3-3. The Radiation Safety workstation may or may not be integrated with the Long-Term Storage function in a single system; such application entity architectural decisions are outside the scope of DICOM, but DICOM services and information objects do facilitate a variety of possible architectures.
The Radiation Safety workstation may be able to create specific reports to respond to dose registry requirements, as established by local regulatory authorities. These reports would generally not be in DICOM format, but would be generated from the data in DICOM X-Ray Radiation Dose SR objects.
Other purposes of the Radiation Safety workstation may include statistical analyses over all Dose Report Objects in order to gain information for educational or quality control purposes. This may include searches for Reports performed in certain time ranges, or with specific equipment, or using certain protocols.
This section was previously defined in the DICOM Standard, but has been retired. See PS3.17-2021b.
Dose Reporting workflow is described in the IHE Radiology Radiation Exposure Monitoring (REM) Integration Profile.
This example of a Print Management SCU Session is provided for informational purposes only. It illustrates the use of one of the Basic Print Management Meta SOP Classes.
Example BB.1-1. Simple Example of Print Management SCU Session
This Section and its sub-sections contain examples of ways in which the Storage Commitment Service Class could be used. This is not meant to be an exhaustive set of scenarios but rather a set of examples.
Figure CC.1-1 is an example of the use of the Storage Commitment Push Model SOP Class.
Node A (an SCU) uses the services of the Storage Service Class to transmit one or more SOP Instances to Node B (1). Node A then issues an N-ACTION to Node B (an SCP) containing a list of references to SOP Instances, requesting that the SCP take responsibility for storage commitment of the SOP Instances (2). If the SCP has determined that all SOP Instances exist and that it has successfully completed storage commitment for the set of SOP Instances, it issues an N-EVENT-REPORT with the status successful (3) and a list of the stored SOP Instances. Node A now knows that Node B has accepted the commitment to store the SOP Instances. Node A might decide that it is now appropriate for it to delete its copies of the SOP Instances. The N-EVENT-REPORT may or may not occur on the same Association as the N-ACTION.
If the SCP determines that committed storage cannot for some reason be provided for one or more SOP Instances referenced by the N-ACTION request, then instead of reporting success it would issue an N-EVENT-REPORT with a status of completed - failures exist. With the N-EVENT-REPORT it would include a list of the SOP Instances that were successfully stored and also a list of the SOP Instances for which storage failed.
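The action information of such an N-ACTION request pairs a Transaction UID with the list of referenced SOP Instances. The following sketch uses a plain-dict model rather than a real DICOM toolkit API, and the UIDs are placeholders.

```python
import uuid

def commitment_request(instances):
    """Model the action information for a Storage Commitment Push
    Model N-ACTION: a freshly generated Transaction UID plus one
    Referenced SOP Sequence item per (SOP Class UID, SOP Instance
    UID) pair the SCU wants the SCP to take responsibility for."""
    return {
        # UUID-derived UID under the "2.25." root, one per request.
        "TransactionUID": "2.25." + str(uuid.uuid4().int),
        "ReferencedSOPSequence": [
            {"ReferencedSOPClassUID": cls, "ReferencedSOPInstanceUID": inst}
            for cls, inst in instances
        ],
    }

# Placeholder instance reference (CT Image Storage SOP Class).
req = commitment_request([("1.2.840.10008.5.1.4.1.1.2", "1.2.3.4.5")])
```

The SCP's N-EVENT-REPORT echoes the same Transaction UID, which is how the SCU correlates the asynchronous result (possibly on a different Association) with its original request.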
Figure CC.1-3 explains the use of the Retrieve AE Title. Using the push model a set of SOP Instances will be transferred from the SCU to the SCP. The SCP may decide to store the data locally or, alternatively, may decide to store the data at a remote location. This example illustrates how to handle the latter case.
Node A, an SCU of the Storage Commitment Push Model SOP Class, informs Node B, an SCP of the corresponding SOP Class, of its wish for storage commitment by issuing an N-ACTION containing a list of references to SOP Instances (1). The SOP Instances will already have been transferred from Node A to Node B (Push Model) (2). If the SCP has determined that storage commitment has been achieved at Node C for all SOP Instances specified in the original Storage Commitment Request (from Node A), it issues an N-EVENT-REPORT (3) as in the previous examples. However, to inform the SCU about the address of the location at which the data will be stored, the SCP includes in the N-EVENT-REPORT the Application Entity Title of Node C.
The Retrieve AE Title can be included in the N-EVENT-REPORT at two different levels. If all the SOP Instances in question were stored at Node C, a single Retrieve AE Title could be used for the whole collection of data. However, the SCP could also choose not to store all the SOP Instances at the same location. In this case the Retrieve AE Title Attribute must be provided at the level of each single SOP Instance in the Referenced SOP Instance Sequence.
This example also applies to the situation where the SCP decides to store the SOP Instances on Storage Media. Instead of providing the Retrieve AE Title, the SCP will then provide a pair of Storage Media File-Set ID and UID.
Figure CC.1-4 is an example of how to use the Push Model with Storage Media to perform the actual transfer of the SOP Instances.
Node A (an SCU) starts out by transferring the SOP Instances for which committed storage is required to Node B (an SCP) by off-line means on some kind of Storage Media (1). When the data is believed to have arrived at Node B, Node A can issue an N-ACTION to Node B containing a list of references to the SOP Instances contained on the Storage Media, requesting that the SCP perform storage commitment of these SOP Instances (2). If the SCP has determined that all the referenced SOP Instances exist (they may already have been loaded into the system or they may still reside on the Storage Media) and that it has successfully completed storage commitment for the SOP Instances, it issues an N-EVENT-REPORT with the status successful (3) and a list of the stored SOP Instances, as in the previous examples.
If the Storage Media has not yet arrived, or if the SCP determines that committed storage cannot for some other reason be provided for one or more SOP Instances referenced by the N-ACTION request, it would issue an N-EVENT-REPORT with a status of completed - failures exist. With the N-EVENT-REPORT it would include a list of the SOP Instances that were successfully stored and also a list of the SOP Instances for which storage failed. The SCP is not required to wait for the Storage Media to arrive (though it may choose to wait) but is free to reject the Storage Commitment request immediately. If so, the SCU may decide to reissue another N-ACTION at a later point in time.
These typical examples of Modality Worklists are provided for informational purposes only.
A Worklist consisting of Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995"), and for a certain Scheduled Station AE title (namely the modality where the Scheduled Procedure Step is going to be performed). See Figure DD.1-1.
A Worklist consisting of the Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995"), and for a certain Modality type (e.g., CT machines). This is a scenario where scheduling is related to a pool of modality resources rather than a single resource.
A Worklist consisting of the Scheduled Procedure Step entities that have been scheduled for a certain time period (e.g., "August 9, 1995"), and for a certain Scheduled Performing Physician. This is a scenario where scheduling is related to human resources rather than equipment resources.
A Worklist consisting of a single Scheduled Procedure Step entity that has been scheduled for a specific Patient. In this scenario, the selection of the Scheduled Procedure Step was done beforehand at the modality. The rationale to retrieve this specific worklist is to convey the most accurate and up-to-date information from the IS, right before the Procedure Step is performed.
The Modality Worklist SOP Class User may retrieve additional Attributes. This may be achieved by Services outside the scope of the Modality Worklist SOP Class.
The following is a simple and non-comprehensive example of a C-FIND Request for the Relevant Patient Information Query Service Class, specifically for the Breast Imaging Relevant Patient Information Query SOP Class, requesting a specific Patient ID, and requiring that any matching response be structured in the form of TID 9000 “Relevant Patient Information for Breast Imaging”.
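The identifier of such a request pairs the Patient ID matching key with a Content Template Sequence naming TID 9000. The following sketch uses a plain-dict model rather than a real DICOM toolkit API.

```python
def relevant_patient_query(patient_id):
    """Model a C-FIND identifier for the Breast Imaging Relevant
    Patient Information Query: the Patient ID to match, plus a
    Content Template Sequence requiring that matching responses be
    structured per TID 9000 from the DICOM Content Mapping Resource."""
    return {
        "PatientID": patient_id,
        "ContentTemplateSequence": [
            {"MappingResource": "DCMR", "TemplateIdentifier": "9000"}
        ],
    }

# Placeholder Patient ID, for illustration only.
query = relevant_patient_query("123456")
```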
The following is a simple and non-comprehensive example of a C-FIND Response for the Relevant Patient Information Query Service Class, answering the C-FIND Request listed above, and structured in the form of TID 9000 “Relevant Patient Information for Breast Imaging” as required by the Affected SOP Class.
The following is a simple, non-comprehensive illustration of a report for a morphological examination with stenosis findings.
Example FF.3-1. Presentation of Report Example #1
The JPIP Referenced Pixel Data transfer syntaxes allow transfer of image objects with a reference to a non-DICOM network service that provides the pixel data rather than encoding the pixel data in (7FE0,0010).
The use cases for this extension to the Standard relate to an application's desire to gain access to a portion of DICOM pixel data without the need to wait for reception of all the pixel data. Examples are:
Stack Navigation of a large CT Study.
In this case, it is desirable to scroll quickly through this large set of data at a lower resolution; once the anatomy of interest is located, the full resolution data is presented. Initially, lower resolution images are requested from the server for the purpose of stack navigation. Once a specific image is identified, the system requests the rest of the detail from the server.
Large Single Image Navigation.
In cases such as microscopy, very large images may be generated. It is undesirable to wait for the complete pixel data to be loaded when only a small portion of the specific image is of interest. Additionally, this large image may exceed the display capabilities thus resulting in a decimation of the image when displayed. A lower resolution image (i.e., one that matches the resolution of the display) is all that is required, as additional data cannot be fully rendered. Once an area of interest is determined, the application can pan and zoom to this area and request additional detail to fill the screen resolution.
It is desirable to generate thumbnail representations for a study. This has been accomplished through various means, many of which require the client to receive the complete pixel data from the server to generate the thumbnail image. This uses significant network bandwidth.
The thumbnails can be considered low-resolution representations of the image. The application can request a low-resolution representation of the image for use as a thumbnail.
Multi-frame Images may encode multiple dimensions. It is desirable for an application to access only the specific frames of interest in a particular dimension without the need to receive the complete pixel data. By using the multi-dimensional description, applications using the JPIP protocol may request frames of the Multi-frame Image.
The association negotiation between the initiator and acceptor controls when this method of transfer is used. An acceptor can potentially accept both the JPIP Referenced Pixel Data transfer syntax and a non-JPIP transfer syntax on different presentation contexts. When an acceptor accepts both of these transfer syntaxes, the initiator chooses the presentation context.
AE1 and AE2 both support both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
AE2 proposes two presentation contexts to AE1, one with a JPIP Referenced Pixel Data Transfer Syntax and the other with a non-JPIP Transfer Syntax
AE2 may choose either presentation context to send the object
AE1 must be able either to receive the pixel data in the C-STORE message or to obtain it from the provider URL
AE1 supports only the JPIP Referenced Pixel Data Transfer Syntax
AE2 supports both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
AE1 accepts only the presentation context with the JPIP Referenced Pixel Data Transfer Syntax, or only the JPIP Referenced Pixel Data Transfer Syntax within the single presentation context proposed
AE2 sends the object with the JPIP Referenced Pixel Data Transfer Syntax
AE1 must be able to retrieve the pixel data from the provider URL
AE1 and AE2 both support both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
In addition to the C-GET presentation context, AE2 proposes to AE1 two presentation contexts for storage sub-operations, one with a JPIP Referenced Pixel Data Transfer Syntax and the other with a non-JPIP Transfer Syntax
AE2 may choose either presentation context to send the object
AE1 must be able either to receive the pixel data in the C-STORE message or to obtain it from the provider URL
AE1 supports only the JPIP Referenced Pixel Data Transfer Syntax
AE2 supports both a JPIP Referenced Pixel Data Transfer Syntax and a non-JPIP Transfer Syntax
In addition to the C-GET presentation context, AE2 proposes to AE1 a single presentation context for storage sub-operations with a JPIP Referenced Pixel Data Transfer Syntax
AE2 sends the object with the JPIP Referenced Pixel Data Transfer Syntax
AE1 must be able to retrieve the pixel data from the provider URL
Figure HH-1 depicts an example of how the data is organized within an instance of the Segmentation IOD. Each item in the Segment Sequence provides the Attributes of a segment. The source image used in all segmentations is referenced in the Shared Functional Groups Sequence. Each item of the Per-Frame Functional Groups Sequence maps a frame to a segment. The Pixel Data classifies the corresponding pixels/voxels of the source Image.
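The frame-to-segment mapping described above can be illustrated with a sketch over a plain-dict model of the Per-Frame Functional Groups Sequence (not a real DICOM toolkit API; the attribute names follow the Segmentation IOD).

```python
def frame_to_segment_map(per_frame_groups):
    """Given the Per-Frame Functional Groups Sequence (modeled here
    as a list of plain dicts), return a mapping from frame index to
    Referenced Segment Number, i.e., which segment each frame of the
    Segmentation instance encodes."""
    return {
        i: frame["SegmentIdentificationSequence"][0]["ReferencedSegmentNumber"]
        for i, frame in enumerate(per_frame_groups)
    }

# Illustrative three-frame object: frames 0-1 encode segment 1,
# frame 2 encodes segment 2.
mapping = frame_to_segment_map([
    {"SegmentIdentificationSequence": [{"ReferencedSegmentNumber": 1}]},
    {"SegmentIdentificationSequence": [{"ReferencedSegmentNumber": 1}]},
    {"SegmentIdentificationSequence": [{"ReferencedSegmentNumber": 2}]},
])
```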
Bar coding or RFID tagging of contrast agents, drugs, and devices can facilitate the provision of critical information to the imaging modality, such as the active ingredient, concentration, etc. The Product Characteristics Query SOP Class allows a modality to submit the product bar code (or RFID tag) to an SCP to look up the product type, active substance, size/quantity, or other parameters of the product.
This product information can be included in appropriate Attributes of the Contrast/Bolus, Device, or Intervention Modules of the Composite SOP Instances created by the modality. The product information then provides key acquisition context data necessary for the proper interpretation of the SOP Instances.
This Annex provides informative guidance on mapping the Product Characteristics Module Attributes of the Product Characteristics Query to the Attributes of several Modules included in Composite IODs.
Within this section, if no Product Characteristics Module source for the Attribute value is provided, the modality would need to provide local data entry or user selection from a pick list to fill in appropriate values. Some values may need to be calculated based on user-performed dilution of the product at the time of administration.
Table II-1. Contrast/Bolus Module Attribute Mapping
- Product Type Code Sequence (0044,0007) > 'Code Sequence Macro' (>Include Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3)
- If contrast is administered without dilution, and using full contents of dispensed product: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- If contrast is administered using full contents of dispensed product: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168) > Code Meaning (0008,0104), where:
- If contrast is administered without dilution: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
Table II-2. Enhanced Contrast/Bolus Module Attribute Mapping
- Product Type Code Sequence (0044,0007) > 'Code Sequence Macro' (>Include Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3)
- Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168), where:
- If contrast is administered without dilution, and using full contents of dispensed product: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- If contrast is administered without dilution: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Parameter Sequence (0044,0013) > Concept Code Sequence (0040,A168) > Code Meaning (0008,0104), where:
- If contrast is administered without dilution, and using full contents of dispensed product: Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
Table II-3. Device Module Attribute Mapping
- Product Type Code Sequence (0044,0007) > 'Code Sequence Macro' (>Include Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3)
- Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Parameter Sequence (0044,0013) > Measurement Units Code Sequence (0040,08EA) > Code Meaning (0008,0104), where:
- Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Parameter Sequence (0044,0013) > Numeric Value (0040,A30A), where:
- Product Name (0044,0008) and/or Product Description (0044,0009)
Table II-4. Intervention Module Attribute Mapping
- Product Type Code Sequence (0044,0007) > 'Code Sequence Macro' (>Include Table 8.8-1 “Code Sequence Macro Attributes” in PS3.3)
For a general introduction to the underlying principles used in the Section C.27.1 “Surface Mesh Module” in PS3.3, see:
Foley J.D., van Dam A., Feiner S.K., Hughes J.F., Computer Graphics: Principles and Practice, Second Edition, Addison-Wesley, 1990.
The dimensionality of the Vectors Macro (Section C.27.3 in PS3.3) is not restricted, to accommodate broader use of this macro in the future. Usage beyond 3-dimensional Euclidean geometry is possible: the Vectors Macro may be used to represent any multi-dimensional numerical entity, such as a set of parameters that are assigned to a voxel in an image or a primitive in a surface mesh.
In electroanatomical mapping, one or more tracked catheters are used to sample the electrophysiological parameters of the inner surface of the heart. Using magnetic tracking information, a set of vertices is generated according to the positions the catheter was moved to during the examination. In addition to its 3D spatial position, each vertex carries a 7-dimensional vector containing the time at which it was measured, the direction the catheter pointed, the maximal potential measured at that point, the duration of that potential, and the point in time (relative to the cardiac cycle) at which the potential was measured.
For biomechanical simulation, the mechanical properties of a vertex or voxel can be represented with an n-dimensional vector.
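The electroanatomical mapping example above can be sketched as follows. This is an illustrative data structure only, not a normative encoding; the class and field names are hypothetical, and the numeric values are arbitrary.

```python
# Illustrative sketch: a vertex carrying an n-dimensional parameter
# vector alongside its 3D position, as in the electroanatomical
# mapping example (here n = 7).
from dataclasses import dataclass

@dataclass
class MappedVertex:
    position: tuple      # (x, y, z) spatial position
    parameters: tuple    # n-dimensional vector of measured parameters

v = MappedVertex(
    position=(12.0, -3.5, 40.1),
    parameters=(0.120,           # time of measurement (s)
                0.0, 0.0, 1.0,   # catheter direction (unit vector)
                2.4,             # maximal potential (mV)
                0.045,           # potential duration (s)
                0.310),          # time within cardiac cycle (s)
)
assert len(v.parameters) == 7
```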
The following example demonstrates the usage of the Surface Mesh Module for a tetrahedron.
4 triplets. The points are marked a, b, c, d in Figure JJ.2-1.
The second triangle is the one marked green in Figure JJ.2-1.
The use cases fall into five broad groups:
A referring physician receives radiological diagnostic reports on CT or MRI examinations. These reports contain references to specific images. He chooses to review these specific images himself and/or to show them to the patient. The references in the report point to particular slices. If the slices are individual images, then they may be obtained individually. If the slices are part of an enhanced multi-frame CT/MR object, then retrieval of the whole multi-frame object might take too long. The Composite Instance Root Retrieve Service allows retrieval of only the selected frames.
The source of the image and frame references in the report could be KOS, CDA, SR, presentation states or other sources.
Selective retrieval can also be used to retrieve two or more arbitrary frames, as may be needed for digital subtraction (masking), and may be used with any multi-frame objects, including multi-frame ultrasound, XR, etc.
Features of interest in many long "video" examinations (e.g., endoscopy) are commonly referenced as times from the start of the examination. The same benefits of reduced WAN bandwidth use could be obtained by shortening the MPEG-2, MPEG-4 AVC/H.264, HEVC/H.265 or JPEG 2000 Part 2 Multi-component based stream prior to transmission.
There are times when it would be useful to retrieve from a Multi-frame Image only those frames satisfying certain dimensionality criteria, such as those CT slices fitting within a chosen volume. Initial retrieval of the image using the Composite Instance Retrieve Without Bulk Data Retrieve Service allows determination and retrieval of a suitable sub-set of frames.
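The volume-based selection described above can be sketched as follows. The helper is hypothetical: it assumes the SCU has already obtained per-frame slice positions (e.g., via Composite Instance Retrieve Without Bulk Data) and simply computes which 1-based frame numbers fall inside a chosen z-range before issuing the frame-level retrieval.

```python
# Hypothetical sketch: choose CT frames whose slice position lies
# within a requested volume, prior to a frame-level retrieve.

def frames_in_z_range(slice_positions, z_min, z_max):
    """slice_positions[i] is the z position (mm) of frame i + 1.
    Returns the matching 1-based frame numbers."""
    return [i + 1 for i, z in enumerate(slice_positions)
            if z_min <= z <= z_max]

positions = [0.0, 2.5, 5.0, 7.5, 10.0, 12.5]
selected = frames_in_z_range(positions, 4.0, 11.0)   # frames 3, 4, 5
```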
Given the massively enhanced amount of dimensional information in the new CT/MR objects, applications could be developed that would use this for statistical purposes without needing to fetch the whole (correspondingly large) pixel data. The Composite Instance Retrieve Without Bulk Data Retrieve Service permits this.
There are many modules in DICOM that use the Image SOP Instance Reference Macro (Table 10-3 “Image SOP Instance Reference Macro Attributes” in PS3.3), which includes the SOP Instance UID and SOP Class UID, but not the Series Instance UID and Study Instance UID. Using the Composite Instance Root Retrieval Classes, however, retrieval of such instances is simple: a direct retrieval may be requested, including only the SOP Instance UID in the Identifier of the C-GET request.
Where the frames to be retrieved and viewed are known in advance (e.g., when they are referenced by an Image Reference Macro in a structured report), they may be retrieved directly using either of the Composite Instance Root Retrieval Classes.
If the image has been stored in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 format, and if the SCU has knowledge independent of DICOM as to which section of a "video" is required for viewing (e.g., perhaps notes from an endoscopy) then the SCU can perform the following steps:
Use known configuration information to identify the available transfer syntaxes.
If MPEG-2, MPEG-4 AVC/H.264, HEVC/H.265 or JPEG 2000 Part 2 Multi-component transfer syntaxes are available, then issue a request to retrieve the required section.
The data received may be slightly longer than that requested, depending on the position of key frames in the data.
If only other transfer syntaxes are available, then the SCU may need to retrieve most of the object using Composite Instance Retrieve Without Bulk Data Retrieve Service to find the frame rate or frame time vector, and then calculate a list of frames to retrieve as in the previous sections.
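The last step above, calculating a frame list from timing information, can be sketched as follows. The function is an assumed helper, not defined by the Standard: it maps a requested time range onto 1-based frame numbers using either a constant Frame Time or a Frame Time Vector (both in milliseconds, with the vector holding the increment preceding each frame).

```python
import math

# Sketch: derive an explicit frame list for a requested time range.

def frames_for_time_range(start_ms, end_ms, frame_time=None,
                          frame_time_vector=None):
    """Return 1-based frame numbers whose start time lies within
    [start_ms, end_ms]."""
    if frame_time_vector is not None:
        # Irregular timing: accumulate the per-frame increments.
        t, frames = 0.0, []
        for i, dt in enumerate(frame_time_vector):
            t += dt
            if start_ms <= t <= end_ms:
                frames.append(i + 1)
        return frames
    # Regular timing: frame n starts at (n - 1) * frame_time.
    first = math.ceil(start_ms / frame_time) + 1
    last = math.floor(end_ms / frame_time) + 1
    return list(range(first, last + 1))
```

For example, with a 40 ms Frame Time, the range 100-200 ms maps to frames 4 through 6.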
The purpose of this annex is to aid those developing SCPs of the Composite Instance Root Retrieve Service Class. The behavior of the application when making any of the changes discussed in this annex should be documented in the conformance statement of the application.
There are many different aspects to consider when extracting frames to make a new object, to ensure that the new image remains a fully valid SOP Instance. The following is a non-exhaustive list of important issues:
Any Attributes that refer to start and end times such as Acquisition Time (0008,0032) and Content Time (0008,0033) must be updated to reflect the new start time if the first frame is not the same as the original. This is typically the case where the multi-frame object is a "video" and where the first frame is not included. Likewise, Image Trigger Delay (0018,1067) may need to be updated.
The Frame Time (0018,1063) may need to be modified if frames in the new image are not a simple contiguous sequence from the original, and if they are irregular, then the Frame Time Vector (0018,1065) will need to be used in its place, with a corresponding change to the Frame Increment Pointer (0028,0009). This also needs careful consideration if non-consecutive frames are requested from an image with non-linearly spaced frames.
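The Frame Time versus Frame Time Vector decision above can be sketched as a simple regularity check on the start times of the extracted frames. The helper and its return convention (a dict keyed by DICOM keyword) are hypothetical; the convention of a leading 0 in the Frame Time Vector is an assumption of this sketch.

```python
# Sketch: after extraction, decide whether a single Frame Time
# (0018,1063) still describes the new object, or whether a Frame Time
# Vector (0018,1065) is needed.

def timing_for_extracted(frame_start_times):
    """frame_start_times: original start times (ms) of the extracted
    frames, in output order."""
    deltas = [b - a for a, b in zip(frame_start_times,
                                    frame_start_times[1:])]
    if len(set(deltas)) <= 1:
        # All increments equal: regular spacing, one Frame Time suffices.
        return {"FrameTime": deltas[0] if deltas else 0.0}
    # Irregular spacing: per-frame increments, first entry 0 here.
    return {"FrameTimeVector": [0.0] + deltas}
```

Extracting a contiguous run keeps a single Frame Time; skipping frames forces the vector form (and a corresponding change to the Frame Increment Pointer).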
Identifying the location of the requested frames within an MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 data stream is non-trivial, but if achieved, then little else other than changes to the starting times are likely to be required for MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 encoded data, as the use-cases for such encoded data (e.g., endoscopy) are unlikely to include explicit frame related data. See the note below however for comments on "single-frame" results.
An application holding data in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 format is unlikely to be able to create a range with a frame increment of greater than one (a calculated frame list with a 3rd value greater than one), and if such a request is made, it might return a status of AA02: Unable to extract Frames.
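For reference, a calculated frame list is expanded into explicit frame numbers as sketched below; the third value of each triplet is the increment that an MPEG-style SCP may be unable to honor when it is greater than one. The function name is hypothetical.

```python
# Sketch: expand a calculated frame list - triplets of
# (first, last, increment) - into explicit 1-based frame numbers.

def expand_calculated_frame_list(triplets):
    frames = []
    for first, last, increment in triplets:
        # Frames first, first + increment, ... up to and including last.
        frames.extend(range(first, last + 1, increment))
    return frames
```

An increment of 1 yields a contiguous run that a video-based SCP can usually serve; an increment of 3 over frames 1-10 yields frames 1, 4, 7, 10, which it may have to reject.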
The approximation feature of the Time Range form of request is especially suitable for data held in MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 form, as it allows the application to find the nearest surrounding key frames, which greatly simplifies editing and improves quality.
Similar issues exist as for MPEG-2, MPEG-4 AVC/H.264 and HEVC/H.265 data and similar solutions apply.
It is very important that functional groups for enhanced image objects are properly re-created to reflect the reduced set of frames, as they include important clinical information. The requirement in the Standard that the resulting object be a valid SOP instance does make such re-creations mandatory.
Images of the Nuclear Medicine SOP class are described by the Frame Increment Pointer (0028,0009), which in turn references a number of different "Vectors" as defined in Table "NM Multi-frame Module" in PS3.3. Like the Functional Groups above, these Vectors are required to contain one value for each frame in the Image, and so their contents must be modified to match the list of frames extracted, ensuring that the values retained are those corresponding to the extracted frames.
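The required Vector modification can be sketched as a simple subsetting operation: each retained value must correspond to an extracted frame. The function name is hypothetical; the Energy Window Vector values shown are arbitrary illustrative data.

```python
# Sketch: subset a per-frame NM "Vector" (one value per frame) so it
# matches the list of extracted frames.

def subset_vector(vector, extracted):
    """vector[i] belongs to original frame i + 1; extracted is the
    list of 1-based original frame numbers being kept, in order."""
    return [vector[n - 1] for n in extracted]

# e.g., an Energy Window Vector for a 6-frame image:
energy_window_vector = [1, 1, 2, 2, 3, 3]
new_vector = subset_vector(energy_window_vector, [3, 4])   # [2, 2]
```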
The requirement that the newly created image object generated in response to a Frame Level retrieve request be of the same SOP Class will frequently result in the need to create a single-frame instance of an object that is more commonly a multi-frame object, but this should not cause any problems with the IOD rules, as all such objects may quite legally have Number of Frames = 1.
However, a single frame may well cause problems for a transfer syntax based on "video" such as those using MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265, and therefore the SCU when negotiating a C-GET should consider this problem, and include one or more transfer syntaxes suitable for holding single or non-contiguous frames where such a retrieval request is being made.
Frame numbers are indexes, not identifiers for frames. In every object, the frame numbers always start at 1 and increment by 1, and therefore they will not be the same after extraction into a new SOP Instance.
A SOP Instance may contain internal references to its own frames such as mask frames. These may need to be corrected.
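Since frame numbers are indexes that restart at 1 in the new SOP Instance, internal frame references (such as mask frames) have to be renumbered, and references to frames that were not extracted cannot be kept. A minimal sketch, with a hypothetical helper name:

```python
# Sketch: remap internal frame references after frame extraction.

def remap_frame_refs(extracted, internal_refs):
    """extracted: original 1-based frame numbers kept, in output order.
    internal_refs: original frame numbers referenced from within the
    object (e.g., mask frames). Returns the corrected references;
    references to frames that were not extracted are dropped."""
    new_number = {old: i + 1 for i, old in enumerate(extracted)}
    return [new_number[r] for r in internal_refs if r in new_number]

# Original frames 5, 6 and 9 become new frames 1, 2 and 3, so an old
# mask reference to frame 6 becomes a reference to frame 2.
refs = remap_frame_refs([5, 6, 9], [6])
```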
There is no requirement in the Frame Level Retrieve Service for the SCP to cache or otherwise retain any of the information it uses to create the new SOP Instance, and therefore, an SCU submitting multiple requests for the same information cannot expect to receive the "same" object with the same Instance and Series UIDs each time. However, an SCP may choose to cache such instances, and if returning an instance identical to one previously created, then the same Instance and Series UIDs may be used. The newly created object is however guaranteed to be a valid SOP instance and an SCU may therefore choose to send such an instance to an SCP using C-STORE, in which case it should be handled exactly as any other Composite Instance of that SOP class.
The time base for the new composite instance should be the same as for the source image and should use the same time synchronization frame of reference. This allows the object to retain synchronization to any simultaneously acquired waveform data.
Where the original object is MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 with audio data interleaved in the MPEG-2 Systems stream, and where the retrieved object is also MPEG-2, MPEG-4 AVC/H.264 or HEVC/H.265 encoded, audio can normally be preserved with synchronization maintained; in other cases, the audio may be lost.
As with all modifications to existing SOP instances, an application should remove any data that it cannot guarantee to make consistent with the modifications it is making. Therefore, an application creating new images from Multi-frame Images should remove any Private Attributes about which it lacks sufficient information to allow safe and consistent modification. This behavior should be documented in the conformance statement.
This annex explains the use of the Specimen Module for pathology or laboratory specimen imaging.
The concept of a specimen is deeply connected to analysis (lab) workflow, the decisions made during analysis, and the "containers" used within the workflow.
Typical anatomic pathology cases represent the analysis of (all) tissue and/or non-biologic material (e.g., orthopedic hardware) removed in a single collection procedure (e.g., surgical operation/event, biopsy, scrape, aspiration etc.). A case is usually called an "Accession" and is given a single accession number in the Laboratory Information System.
During an operation, the surgeon may label and send one or more discrete collections of material (specimens) to pathology for analysis. By sending discrete, labeled collections of tissue in separate containers, the surgeon is requesting that each discrete labeled collection (specimen) be analyzed and reported independently - as a separate "Part" of the overall case. Therefore, each Part is an important, logical component of the laboratory workflow. Within each Accession, each Part is managed separately from the others and is identified uniquely in the workflow and in the Laboratory Information System.
During the initial gross (or "eyeball") examination of a Part, the pathologist may determine that some or all of the tissue in a Part should be analyzed further (usually through histology). The pathologist will place all or selected sub-samples of the material that makes up the Part into labeled containers (cassettes). After some processing, all the tissue in each cassette is embedded in a paraffin block (or epoxy resin for electron microscopy); at the end of the process, the block is physically attached to the cassette and has the same label. Therefore, each "Block" is an important, logical component of the laboratory workflow, which corresponds to physical material in a container for handling, separating and identifying material managed in the workflow. Within the workflow and Laboratory Information System, each Block is identified uniquely and managed separately from all others.
From a Block, technicians can slice very thin sections. One or more of these sections is placed on one or more slides. (Note that material from a Part can also be placed directly on a slide, bypassing the block.) A slide can be stained and then examined by the pathologists. Each "Slide", therefore, is an important, logical component of the laboratory workflow, which corresponds to physical material in a container for handling, separating and identifying material managed in the workflow. Within the workflow and within the Laboratory Information Systems, each Slide is identified uniquely and managed separately from all others.
While "Parts" to "Blocks" to "Slides" is by far the most common workflow in pathology, it is important to note that there can be numerous variations on this basic theme. In particular, laser capture microdissection and other slide sampling approaches for molecular pathology are in increasing use. Such new workflows require a generic approach in the Standard to identifying and managing specimen identification and processing, not one limited only to "Parts", "Blocks", and "Slides". Therefore, the Standard adopts a generic approach of describing uniquely identified Specimens in Containers.
A physical object (or a collection of objects) is a specimen when the laboratory considers it a single discrete, uniquely identified unit that is the subject of one or more steps in the laboratory (diagnostic) workflow.
To say the same thing in a slightly different way: "Specimen" is defined as a role played by a physical entity (one or more physical objects considered as a single unit) when the entity is identified uniquely by the laboratory and is the direct subject of one or more steps in a laboratory (diagnostic) workflow.
It is worthwhile to expand on this very basic, high level definition because it contains implications that are important to the development and implementation of the DICOM Specimen Module. In particular:
A single discrete physical object or a collection of several physical objects can act as a single specimen as long as the collection is considered a unit during the laboratory (diagnostic) process step involved. In other words, a specimen may include multiple physical pieces, as long as they are considered a single unit in the workflow. For example, when multiple fragments of tissue are placed in a cassette, most laboratories would consider that collection of fragments as one specimen (one "block").
A specimen must be identified. It must have an ID that identifies it as a unique subject in the laboratory workflow. An entity that does not have an identifier is not a specimen.
Specimens are sampled and processed during a laboratory's (diagnostic) workflow. Sampling can create new (child) specimens. These child specimens are full specimens in their own right (they have unique identifiers and are direct subjects in one or more steps in the laboratory's (diagnostic) workflow). This property of specimens (that they can be created from existing specimens by sampling) extends a common definition of specimen, which limits the word to the original object received for examination (e.g., from surgery).
However, child specimens can and do carry some Attributes from ancestors. For example, a tissue section cut from a formalin fixed block remains formalin fixed, and a tissue section cut from a block dissected from the proximal margin of a colon resection is still made up of tissue from the proximal margin. A description of a specimen therefore, may require description of its parent specimens.
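The ancestor-attribute relationship described above can be sketched as follows. The classes, field names, and specimen IDs are purely hypothetical; the point is only that describing a child specimen may require walking its parent chain.

```python
# Illustrative sketch: a child specimen created by sampling carries
# forward attributes of its ancestors (e.g., a section cut from a
# formalin-fixed block remains formalin fixed).
from dataclasses import dataclass, field

@dataclass
class Specimen:
    specimen_id: str
    attributes: dict = field(default_factory=dict)
    parent: "Specimen" = None

    def inherited(self):
        """Own attributes merged over those of the ancestor chain."""
        merged = self.parent.inherited() if self.parent else {}
        merged.update(self.attributes)
        return merged

part = Specimen("S01-100 A", {"fixative": "formalin",
                              "site": "proximal margin"})
block = Specimen("S01-100 A1", {"embedding": "paraffin"}, parent=part)
section = Specimen("S01-100 A1-1", parent=block)
# The section is still formalin fixed and still proximal-margin tissue.
assert section.inherited()["fixative"] == "formalin"
```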
A specimen is defined by decisions in the laboratory workflow. For example, in a typical laboratory, multiple tissue sections cut from a single block and placed on the same slide are considered a single specimen (a single unit identified by the slide number). However, if the histotechnologists had placed each tissue section on its own slide (and given each slide a unique number), each tissue section would be a specimen in its own right.
Specimen containers (or just "containers") play an important role in laboratory (diagnostic) processes. In most, but not all, process steps, specimens are held in containers, and a container often carries its specimen's ID. Sometimes the container becomes intimately involved with the specimen (e.g., a paraffin block), and in some situations (such as examining tissue under the microscope) the container (the slide and coverslip) become part of the optical path.
Containers have identifiers that are important in laboratory operations and in some imaging processes (such as whole slide imaging). The DICOM Specimen Module distinguishes the Container ID and the Specimen ID, making them different data elements. In many laboratories where there is one specimen per container, the value of the specimen ID and container ID will be same. However, there are use cases in which there are more than one specimen in a container. In those situations, the value of the container ID and the specimen IDs will be different (see Section NN.3.5).
Containers are often made up of components. For example, a "slide" is a container that is made up of the glass slide, the coverslip, and the "glue" that binds them together. The Module allows each component to be described in detail.
The Specimen Module (see PS3.3) defines formal DICOM Attributes for the identification and description of laboratory specimens when said specimens are the subject of a DICOM image. The Module is focused on the specimen and laboratory Attributes necessary to understand and interpret the image. These include:
Attributes that identify (specify) the specimen (within a given institution and across institutions).
Attributes that identify and describe the container in which the specimen resides. Containers are intimately associated with specimens in laboratory processes, often "carry" a specimen's identity, and sometimes are intimately part of the imaging process, as when a glass slide and coverslip are in the optical path in microscope imaging.
Attributes that describe specimen collection, sampling and processing. Knowing how a specimen was collected, sampled, processed and stained is vital in interpreting an image of a specimen. One can make a strong case that those laboratory steps are part of the imaging process.
Attributes that describe the specimen or its ancestors (see Section NN.2.1) when these descriptions help with the interpretation of the image.
Attributes that convey diagnostic opinions or interpretations are not within the scope of the Specimen Module. The DICOM Specimen Module does not seek to replace or mirror the pathologist's report.
The Laboratory Information System (LIS) is critical to management of workflow and processes in the pathology lab. It is ultimately the source of the identifiers applied to specimens and containers, and is responsible for recording the processes that were applied to specimens.
An important purpose of the Specimen Module is to store specimen information necessary to understand and interpret an image within the image information object, as images may be displayed in contexts where the Laboratory Information System is not available. Implementation of the Specimen Module therefore requires close, dynamic integration between the LIS and imaging systems in the laboratory workflow.
It is expected that the Laboratory Information Systems will participate in the population of the Specimen Module by passing the appropriate information to a DICOM compliant imaging system in the Modality Worklist, or by processing the image objects itself and populating the Specimen Module Attributes.
The nature of the LIS processing for imaging in the workflow will vary by product implementation. For example, an image of a gross specimen may be taken before a gross description is transcribed. A LIS might provide short term storage for images and update the description Attributes in the module after a particular event (such as sign out). The DICOM Standard is silent on such implementation issues, and only discusses the Attributes defined for the information objects exchanged between systems.
A pathology "case" is a unit of work resulting in a report with associated codified, billable acts. Case Level Attributes are generally outside the scope of the Specimen Module. However, a case is equivalent to a DICOM Requested Procedure, for which Attributes are specified in the DICOM Study level modules.
DICOM has existing methods to handle most "case level" issues, including accepting cases referred for other institutions, clinical history, status codes, etc. These methods are considered sufficient to support DICOM imaging in Pathology.
The concept of an "Accession Number" in Pathology has been determined to be sufficiently equivalent to an "Accession Number" in Radiology that the DICOM data element "Accession Number" at the Study level of the DICOM information model may be used for the Pathology Accession Number with essentially the existing definition.
It is understood that the value of the laboratory accession number is often incorporated as part of a Specimen ID. However, there is no presumption that this is always true, and the Specimen ID should not be parsed to determine an accession number. The accession number will always be sent in its own discrete Attribute.
While created with anatomic pathology in mind, the DICOM Specimen Module is designed to support specimen identification, collection, sampling and processing Attributes for a wide range of laboratory workflows. The Module is designed in a general way so as not to limit the nature, scope, scale or complexity of laboratory (diagnostic) workflow that may generate DICOM images.
To provide specificity on the general process, the Module provides extendable lists of Container Types, Container Component Types, Specimen Types, Specimen Collection Types, Specimen Process Types and Staining Types. It is expected that the value sets for these "types" can be specialized to describe a wide range of laboratory procedures.
In typical anatomic pathology practice, and in Laboratory Information Systems, there are conventionally three identified levels of specimen preparation - part, block, and slide. These terms are actually conflations of the concepts of specimen and container. Not all processing can be described by only these three levels.
A part is the uniquely identified tissue or material collected from the patient and delivered to the pathology department for examination. Examples of parts would include a lung resection, colon biopsy at 20 cm, colon biopsy at 30 cm, peripheral blood sample, cervical cells obtained via scraping or brush, etc. A part can be delivered in a wide range of containers, usually labeled with the patient's name, medical record number, and a short description of the specimen such as "colon biopsy at 20 cm". At accession, the lab creates a part identifier and writes it on the container. The container therefore conveys the part's identifier in the lab.
A block is a uniquely identified container, typically a cassette, containing one or more pieces of tissue dissected from the part (tissue dice). The tissue pieces may be considered, by some laboratories, as separate specimens. However, in most labs, all the tissue pieces in a block are considered a single specimen.
A slide is a uniquely identified container, typically a glass microscope slide, containing tissue or other material. Common slide preparations include:
Virtually all specimens in a clinical laboratory are associated with a container, and specimens and containers are both important in imaging (see "Definitions", above). In most clinical laboratory situations there is a one to one relationship between specimens and containers. In fact, pathologists and LIS systems routinely consider a specimen and its container as single entity; e.g., the slide (a container) and the tissue sections (the specimen) are considered a single unit.
However, there are legitimate use cases in which a laboratory may place two or more specimens in the same container (see Section NN.4 for examples). Therefore, the DICOM Specimen Module distinguishes between a Specimen ID and a Container ID. However, in situations where there is only one specimen per container, the value of the Specimen ID and Container ID may be the same (as assigned by the LIS).
Some Laboratory Information Systems may, in fact, not support multiple specimens in a container, i.e., they manage only a single identifier used for the combination of specimen and container. This is not contrary to the DICOM Standard; images produced under such a system will simply always assert that there is only one specimen in each container. However, a pathology image display application that shows images from a variety of sources must be able to distinguish between container and specimen IDs, and handle the 1:N relationship.
In allowing for one container to have multiple specimens, the Specimen Module asserts that it is the Container, not the Specimen, that is the unique target of the image. In other words, one Container ID is required in the Specimen Module, and multiple Specimen IDs are allowed in the Specimen Sequence. See Figure NN.3-1.
If there is more than one specimen in a container, there must be a mechanism to identify and locate each specimen. When there is more than one specimen in a container, the Module allows various approaches to specify their locations. The Specimen Localization Content Item Sequence (0040,0620), through its associated TID 8004 “Specimen Localization”, allows the specimen to be localized by a distance in three dimensions from a reference point on the container, by a textual description of a location or physical Attribute such as a colored ink, or by its location as shown in a referenced image of the container. The referenced image may use an overlay, burned-in annotation, or an associated Presentation State SOP Instance to specify the location of the specimen.
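The container-to-specimen relationship described above can be sketched schematically. These are plain illustrative classes, not a DICOM toolkit API; the IDs and localization strings are hypothetical, and localization by descriptive text is only one of the approaches allowed by TID 8004.

```python
# Schematic sketch: one Container ID per imaged container, with one
# or more specimens, each optionally localized within the container
# (cf. Specimen Localization Content Item Sequence (0040,0620)).
from dataclasses import dataclass

@dataclass
class LocalizedSpecimen:
    specimen_id: str
    localization: str = ""       # e.g., descriptive text such as "Left"

@dataclass
class SpecimenModule:
    container_id: str            # exactly one container is required
    specimens: list              # one or more LocalizedSpecimen items

slide = SpecimenModule(
    container_id="S01-100 A1 SL1",
    specimens=[
        LocalizedSpecimen("S01-100 A1-1", "Left"),
        LocalizedSpecimen("S01-100 A1-2", "Right"),
    ],
)
```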
Because the Module supports one container with multiple specimens, the Module can be used with an image of:
However, the Module is not designed for use with an image of:
Multiple specimens that are not associated with the same container, e.g., two gross specimens (two Parts) on a photography table, each with a little plastic label with their specimen number.
Multiple containers that hold specimens (e.g., eight cassettes containing breast tissue being X-Rayed for calcium).
Such images may be included in the Study, but would not use the Specimen Module; they would, for instance, be general Visible Light Photographic images. Note, however, that the LIS might identify a "virtual container" that contains such multiple real containers, and manage that virtual container in the laboratory workflow.
In normal clinical practice, when there is one specimen per container, the value of the specimen identifier and the value of the container identifier will be the same. In Figure NN.4-1, each slide is prepared from a single tissue sample from a single block (cassette). The specimen and container type for the slide are present in the Section C.7.6.22 “Specimen Module” in PS3.3, and not repeated in the Specimen Preparation Sequence Item for staining.
Figure NN.4-2 shows more than one tissue item on the same slide coming from the same block (but cut from different levels). The laboratory information system considers two tissue sections (on the same slide) to be separate specimens.
Two Specimen IDs will be assigned, different from the Container (Slide) ID. The specimens may be localized, for example, by descriptive text "Left" and "Right".
If the slide is imaged, a single image with more than one specimen may be created. In this case, both specimens must be identified in the Specimen Sequence of the Specimen Module. If only one specimen is imaged, only its Specimen ID must be included in the Specimen Sequence; however, both IDs may be included (e.g., if the image acquisition system cannot determine which specimens in/on the container are in the field of view).
Figure NN.4-3 shows processing where more than one tissue item is embedded in the same block within the same Cassette, but coming from different clinical specimens (parts). This may represent different lymph nodes embedded into one cassette, different tissue dice coming from different parts in a frozen section examination, or tissue from the proximal and distal margins placed in the same cassette. Because the laboratory wanted to maintain the samples as separate specimens (to maintain their identity), the LIS gave them different IDs, and the tissue from Part A was inked blue and the tissue from Part B was inked red.
The specimen IDs must be different from each other and from the container (cassette) ID. The specimens may be localized, for example, by descriptive text "Red" and "Blue" for Visual Coding of Specimen.
If a section is made from the block, each tissue section will include fragments from two specimens (red and blue). The slide (container) ID will be different from the section IDs (which will also be different from each other).
If the slide is imaged, a single image with more than one specimen may be created but the different specimens must be identified and unambiguously localized within the container.
Figure NN.4-4 shows the result of two tissue collections placed on the same slide by the surgeon. E.g., in gynecological smears the different directions of smears might represent different parts (portio, cervix).
The specimen IDs must be different from each other and from the container (slide) ID. The specimens may be localized, for example, by descriptive text "Short direction smear" and "Long direction smear".
Slides created from a TMA block have small fragments of many different tissues coming from different patients, all of which may be processed at the same time, under the same conditions by a desired technique. These are typically utilized in research. See Figure NN.4-5. Tissue items (spots) on the TMA slide come from different tissue items (cores) in TMA blocks (from different donor blocks, different parts and different patients).
Each Specimen (spot) must have its own ID. The specimens may be localized, for example, by X-Y coordinates, or by a textual column-row identifier for the spot (e.g., "E3" for fifth column, third row).
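The textual column-row identifier can be mapped to numeric coordinates with a trivial helper. This is a hypothetical function for illustration only; the text above gives only "E3" as an example of the convention (letter for column, digit for row).

```python
def spot_to_col_row(spot_id: str) -> tuple[int, int]:
    """Map a textual TMA spot identifier such as "E3" to 1-based
    (column, row): the letter gives the column, the digits the row.
    Hypothetical helper illustrating the convention described above."""
    letter, digits = spot_id[0].upper(), spot_id[1:]
    return (ord(letter) - ord("A") + 1, int(digits))

print(spot_to_col_row("E3"))  # (5, 3): fifth column, third row
```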
If the TMA slide is imaged as a whole, e.g., at low resolution as an index, it must be given a "pseudo-patient" identifier (since it does not relate to a single patient). Images created for each spot should be assigned to the real patients.
The Specimen Module content is specified as a Macro as an editorial convention to facilitate its use in both Composite IODs and in the Modality Worklist Information Model.
The Module has two main sections. The first deals with the specimen container. The second deals with the specimens within that container. Because more than one specimen may reside in a single container, the specimen section is set up as a sequence.
The Container section is divided into two "sub-sections":
The Specimen Description Sequence contains five "sub-sections":
One deals with preparation of the specimen and its ancestor specimens (including sampling, processing and staining). Because of its importance in interpreting slide images, staining is distinguished from other processing. Specimen preparation is set up as a sequence of process steps (multiple steps are possible); each step is in turn a sequence of Content Items (Attributes using coded vocabularies). This is the most complex part of the module.
One deals with the original anatomic location of the specimen in the patient.
One deals with specimen localization within a container. This is used to identify specimens when there is more than one in a container. It is set up as a sequence.
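The two-level nesting of the Specimen Preparation Sequence can be illustrated as follows. This is a hedged sketch using plain Python lists and tuples; the identifiers are hypothetical, and free-text strings stand in for the coded vocabulary used in the real Content Items.

```python
# Outer sequence: preparation steps (ancestry first to last).
# Inner sequences: the Content Items of each step.
specimen_preparation_sequence = [
    [  # step 1: sampling from the parent specimen (part -> block)
        ("Specimen Identifier", "BLK-1A"),
        ("Processing Type", "Sampling"),
    ],
    [  # step 2: processing (e.g., fixation and embedding)
        ("Specimen Identifier", "SL-1A-1"),
        ("Processing Type", "Specimen processing"),
    ],
    [  # step 3: staining, kept distinct from other processing
        ("Specimen Identifier", "SL-1A-1"),
        ("Processing Type", "Staining"),
    ],
]

# Every step is itself a sequence of Content Items.
assert all(isinstance(step, list) for step in specimen_preparation_sequence)
```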
This section includes examples of the use of the Specimen Module. Each example has two tables.
The first table contains the majority of the container and specimen elements of the Specimen Module. The second includes the Specimen Preparation Sequence (which documents the sampling, processing and staining steps).
In the first table, invocations of Macros have been expanded to their constituent Attributes. The Table does not include Type 3 (optional) Attributes that are not used for the example case.
The second table shows the Items of the Specimen Preparation Sequence and its subsidiary Specimen Preparation Step Content Item Sequence. That latter sequence itself has subsidiary Code Sequence Items, but these are shown in the canonical DICOM "triplet" format (see PS3.16), e.g., (44714003, SCT, "Left Upper Lobe of Lung"). In the table, inclusions of subsidiary templates have been expanded to their constituent Content Items. The Table does not include Type U (optional) Content Items that are not used for the example case.
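The canonical triplet format can be represented as a simple value type. This sketch uses a Python NamedTuple for illustration; the field names are not DICOM keywords, but the triplet itself is the example given above.

```python
from typing import NamedTuple

class CodedConcept(NamedTuple):
    """The canonical DICOM code triplet: (Code Value, Coding Scheme
    Designator, Code Meaning); see PS3.16."""
    value: str
    scheme: str
    meaning: str

# The triplet used as the example above:
anatomy = CodedConcept("44714003", "SCT", "Left Upper Lobe of Lung")
```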
Values in the colored columns of the two tables actually appear in the image object.
This is an example of how the Specimen Module can be populated for a gross specimen (a lung lobe resection received from surgery). The associated image would be a gross image taken in the gross room.
Table NN.6-1. Specimen Module for Gross Specimen
The identifier for the container that contains the specimen(s) being imaged.
Note that the container ID is required, even though the container itself does not figure in the image.
Type of container that contains the specimen(s) being imaged. Zero or one Items shall be permitted in this Sequence.
This would likely be a default container value for all gross specimens. The LIS does not keep information on the gross container type, so this is an empty sequence.
Sequence of identifiers and detailed description of the specimen(s) being imaged. One or more Items shall be included in this Sequence.
A departmental information system identifier for the Specimen.
The name or code for the institution that has assigned the Specimen Identifier.
The LIS "Specimen Received" field is mapped to this DICOM field.
A: Received fresh for intraoperative consultation, labeled with the patient's name, number and "left upper lobe," is a pink-tan, wedge-shaped segment of soft tissue, 6.9 x 4.2 x 1.0 cm. The pleural surface is pink-tan and glistening with a stapled line measuring 12.0 cm. in length. The pleural surface shows a 0.5 cm. area of puckering. The pleural surface is inked black. The cut surface reveals a 1.2 x 1.1 cm, white-gray, irregular mass abutting the pleural surface and deep to the puckered area. The remainder of the cut surface is red-brown and congested. No other lesions are identified. Representative sections are submitted.
This is a mapping from the LIS "Gross Description" field. Note that in Case S07-100 there were six parts. This means the LIS gross description field will have six sections (A - F). We would have to parse the gross description field into those parts (A-F) and then only incorporate section "A" into this Attribute. NOTE: One could consider listing all the Blocks associated with Part A. It would be easy to do and might give useful information.
Sequence of Items identifying the process steps used to prepare the specimen for image acquisition. One or more Items may be present. This Sequence includes description of the specimen sampling step from a parent specimen, potentially back to the original part collection.
(see Table NN.6-2)
Sequence of Content Items identifying the processes used in one preparation step to prepare the specimen for image acquisition. One or more Items may be present.
Original anatomic location in patient of specimen. This location may be inherited from the parent specimen, or further refined by modifiers depending on the sampling procedure for this specimen.
This is an example of how the Specimen Module can be populated for a slide (from a lung lobe resection received from surgery). The associated image would be a whole slide image.
Table NN.6-3. Specimen Module for a Slide
The identifier for the container that contains the specimen(s) being imaged.
Type of container that contains the specimen(s) being imaged. Only a single Item shall be permitted in this Sequence.
This would likely be a default container value for all slide specimens.
Description of one or more components of the container (e.g., description of the slide and of the coverslip). One or more Items may be included in this Sequence.
Type of container component. One Item shall be included in this Sequence.
Sequence of identifiers and detailed description of the specimen(s) being imaged. One or more Items shall be included in this Sequence.
A departmental information system identifier for the Specimen.
The name or code for the institution that has assigned the Specimen Identifier.
This Attribute concatenates four LIS fields: 1. Specimen Received, 2. Cassette Summary, 3. Number of Pieces in Block, 4. Staining. This does not always work so nicely; often one or more of the fields is empty or confusing.
A: Received fresh for intraoperative consultation, labeled with the patient's name, number and "left upper lobe," is a pink-tan, wedge-shaped segment of soft tissue, 6.9 x 4.2 x 1.0 cm. The pleural surface is pink-tan and glistening with a stapled line measuring 12.0 cm. in length. The pleural surface shows a 0.5 cm. area of puckering. The pleural surface is inked black. The cut surface reveals a 1.2 x 1.1 cm, white-gray, irregular mass abutting the pleural surface and deep to the puckered area. The remainder of the cut surface is red-brown and congested. No other lesions are identified. Representative sections are submitted.
This is a mapping from the LIS Gross Description Field and the Block Summary. Note that in Case S07-100, there were six parts. This means the LIS gross description field will have six sections (A - F). We would have to parse the gross description field into those parts (A-F) and then only incorporate section "A" into this Attribute. The same would be true of the Blocks.
Sequence of Items identifying the process steps used to prepare the specimen for image acquisition. One or more Items may be present. This Sequence includes description of the specimen sampling step from a parent specimen, potentially back to the original part collection.
(see Table NN.6-4)
Sequence of Content Items identifying the processes used in one preparation step to prepare the specimen for image acquisition. One or more Items may be present.
Original anatomic location in patient of specimen. This location may be inherited from the parent specimen, or further refined by modifiers depending on the sampling procedure for this specimen.
The example Specimen Preparation Sequence first describes the most recent processing of the slide (staining), then goes back to show its provenance. Notice that there is no sampling process for the slide described here; the LIS did not record the step of slicing of blocks into slides.
Workflow management in the DICOM imaging environment utilizes the Modality Worklist (MWL) and Modality Performed Procedure Step (MPPS) services. Within the pathology department, these services support both human-controlled imaging (e.g., gross specimen photography) and automated slide scanning modalities.
While this section provides an overview of the DICOM services for managing workflow, the reader is referred to the IHE Anatomic Pathology Domain Technical Framework for specific use cases and profiles for pathology imaging workflow management.
The contents of the Specimen Module may be conveyed in the Scheduled Specimen Sequence of the Modality Worklist query. This feature allows an imaging system (Modality Worklist SCU) to query for work items by Container ID. The worklist server (SCP) of the laboratory information system can then return all the necessary information for creating a DICOM specimen-related image. This information includes patient identity and the complete slide processing history (including stain applied). It may be used for imaging set-up and/or inclusion in the Image SOP Instance.
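The query-by-Container-ID pattern can be sketched as follows. This is a hedged illustration using plain Python dicts, not an actual DIMSE toolkit; all identifier values are hypothetical, and the response is reduced to a few representative attributes.

```python
# MWL C-FIND sketch: the scanner reads the slide barcode and queries by
# Container ID inside the Scheduled Specimen Sequence (values hypothetical).
mwl_query = {
    "ScheduledSpecimenSequence": [
        {"ContainerIdentifier": "SL-001"}  # matching key: the scanned barcode
    ],
    "PatientName": "",  # return keys, left empty in the query
    "PatientID": "",
}

# The LIS worklist SCP returns patient identity plus the specimen
# information needed to populate the Image SOP Instance.
mwl_response = {
    "PatientName": "Doe^Jane",
    "PatientID": "PID-12345",
    "ScheduledSpecimenSequence": [
        {"ContainerIdentifier": "SL-001"}  # plus full Specimen Module content
    ],
}

# The response matches on the requested container.
assert (mwl_response["ScheduledSpecimenSequence"][0]["ContainerIdentifier"]
        == mwl_query["ScheduledSpecimenSequence"][0]["ContainerIdentifier"])
```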
In addition to the Specimen Module Attributes, the set up of an automated whole slide scanner requires the acquisition parameters such as scan resolution, number of Z-planes, fluorescence wavelengths, etc. A managed set of such parameters is called a Protocol (see PS3.3), and the MWL response may contain a Protocol Code to control scanning set up. Additional set-up parameters can be passed as Content Items in the associated Protocol Context Sequence; this might be important when the reading pathologist requests a rescan of the slide with slightly different settings.
When scanning is initiated, the scanner reports the procedure step in a Modality Performed Procedure Step (MPPS) transaction.
Upon completion (or cancellation) of an image acquisition, the modality reports the work completed in an update to the MPPS. The MPPS can convey both the Container ID and the image UIDs, so that the workflow manager (laboratory information system) is advised of the image UIDs associated with each imaged specimen.
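The completion report can be sketched in the same spirit. This is a simplified illustration with plain dicts; the exact attribute nesting of a real MPPS dataset is not reproduced here, and all values are hypothetical.

```python
# MPPS "completed" sketch: the update ties the Container ID to the UIDs of
# the images created from it, so the LIS can associate images with specimens.
mpps_completed = {
    "PerformedProcedureStepStatus": "COMPLETED",
    "ContainerIdentifier": "SL-001",
    "ReferencedImageUIDs": [
        "1.2.840.99999.1.1",  # hypothetical SOP Instance UIDs
        "1.2.840.99999.1.2",
    ],
}
```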
Intra-oral radiography typically involves acquisition of multiple images of various parts of the dentition. Many digital radiographic systems offer customized templates that are used for displaying the images in a study on the screen. These templates may also be referred to as mounts or view sets. The Structured Display object provides a standard method of encoding and exchanging the layout and intended display of such image sets. A structured display object created in this manner could be stored with a study and exchanged with images to allow for complete reproduction of the original exam.
A patient visits a General Dentist where a Full Mouth Series Exam with 18 images is acquired. The dentist observes severe bone loss and refers the patient to a Periodontist. The 18 images from the Full Mouth Series along with a Structured Display are copied to a DICOM Interchange CD and sent with the patient to see the specialist. The Periodontist uses the CD to open the exam in his Dental Radiographic Software and consults via phone with the General Dentist. Both are able to observe the same exam showing the images on each user's display using the exact same layout.
A patient requests cosmetic surgery to enhance their facial appearance. The case requires consultation between an orthodontist in New York and an oral surgeon in California. The cephalometric series of 2D projections constructed from the volumetric CT data that is used for the discussion is arranged by a Structured Display for transfer between the two practitioners.
A dental provider wishes to capture a series of DICOM IO images of the patient’s dentition. Based on tooth morphology, the teeth are divided into molars, premolars, canines and incisors, and a number of images are acquired for each jaw. The anatomic information is captured utilizing the standard coded triplet schema. This standard code sequence is based on ISO 3950-2010, Dentistry - Designation system for teeth and areas of the oral cavity.
Every IO image should have anatomic information, conveyed either through the primary sequence or through the modifier sequence.
In most standard cases, images are arranged in structured layouts. These structured displays are useful for sharing between providers for reference purposes.
Table OO.1.1-1 shows structured display standard templates, where the Viewset ID is based on the Japanese Society for Oral and Maxillofacial Radiology (JSOMR) classification provided by JIRA (Japan Medical Imaging and Radiological Systems Industries Association, www.jira-net.or.jp). The expected or typical locations, regions, and designation codes of the teeth to be imaged are based on ISO 3950-2010, Dentistry - Designation system for teeth and areas of the oral cavity. For all the hanging protocols listed in Table OO.1.1-1, the value to use for Hanging Protocol Creator (0072,0008) is "JSOMR" and the value to use for Hanging Protocol Name (0072,0002) does not include "JSOMR" (e.g., "DL-S001A", not "JSOMR DL-S001A").
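The naming convention can be made concrete with a small helper. This is a hypothetical function written for illustration; only the creator value "JSOMR" and the example name "DL-S001A" come from the text above.

```python
def jsomr_hanging_protocol(viewset_id: str) -> dict:
    """Build the creator/name attribute pair for a JSOMR-defined structured
    display layout following the convention described above.
    Hypothetical helper; keys mirror the DICOM attribute names loosely."""
    if viewset_id.startswith("JSOMR"):
        raise ValueError('Hanging Protocol Name must not include "JSOMR"')
    return {
        "HangingProtocolCreator": "JSOMR",  # (0072,0008)
        "HangingProtocolName": viewset_id,  # (0072,0002), e.g. "DL-S001A"
    }

hp = jsomr_hanging_protocol("DL-S001A")
print(hp["HangingProtocolName"])  # DL-S001A
```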