utilities
Index
Namespaces
Classes
Variables
Functions
- calibrateImageSpacing
- clip
- debounce
- getAnnotationNearPoint
- getAnnotationNearPointOnEnabledElement
- getCalibratedAspect
- getCalibratedLengthUnitsAndScale
- getCalibratedProbeUnitsAndValue
- getSphereBoundsInfo
- getViewportForAnnotation
- isObject
- jumpToSlice
- pointInShapeCallback
- pointInSurroundingSphereCallback
- pointToString
- roundNumber
- scroll
- throttle
- triggerAnnotationRender
- triggerAnnotationRenderForToolGroupIds
- triggerAnnotationRenderForViewportIds
- triggerEvent
Namespaces
boundingBox
getBoundingBoxAroundShape
extend2DBoundingBoxInViewAxis
Uses the current bounds of the 2D rectangle and extends them in the view axis by numSlicesToProject. It compares the min and max of each IJK axis to find the view axis (for axial, zMin === zMax) and then calculates the extended range. It assumes the slice is relative to the current slice and adds the given number of slices to the current max of the bounding box.
Parameters
boundsIJK: [Point2, Point2, Point2]
[[iMin, iMax], [jMin, jMax], [kMin, kMax]]
numSlicesToProject: number
Returns [Types.Point2, Types.Point2, Types.Point2]
extended bounds
getBoundingBoxAroundShapeIJK
Given vertex (point) coordinates in 2D or 3D IJK space, it calculates the minimum and maximum coordinate on each axis and returns them. If dimensions are provided, it also clips the min and max to the provided width, height, and depth.
Parameters
points: Point2[] | Point3[]
Shape corner point coordinates in IJK (image) space
optional dimensions: Point2 | Point3
Bounds to clip the min/max values to
Returns BoundingBox
[[xMin,xMax],[yMin,yMax], [zMin,zMax]]
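A minimal sketch of computing an IJK bounding box for a set of points, assuming these helpers are re-exported as utilities.boundingBox from @cornerstonejs/tools (the exact import path may differ in your setup):

import { utilities } from '@cornerstonejs/tools';
import type { Types } from '@cornerstonejs/core';

const points: Types.Point3[] = [
  [10, 12, 3],
  [25, 40, 3],
  [18, 30, 5],
];

// Clip the result to the volume dimensions so the bounds never exceed the image.
const dimensions: Types.Point3 = [256, 256, 100];

const boundsIJK = utilities.boundingBox.getBoundingBoxAroundShapeIJK(points, dimensions);
// boundsIJK === [[10, 25], [12, 40], [3, 5]]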
getBoundingBoxAroundShapeWorld
Given vertex (point) coordinates in 2D or 3D world coordinates, it calculates the minimum and maximum coordinate on each axis and returns them. If clipBounds are provided, it also clips the min and max to the provided width, height, and depth.
Parameters
points: Point2[] | Point3[]
Shape corner point coordinates in world space
optional clipBounds: Point2 | Point3
Bounds to clip the min/max values to
Returns BoundingBox
[[xMin,xMax],[yMin,yMax], [zMin,zMax]]
cine
Events
CINE Tool Events
CLIP_STARTED
CLIP_STOPPED
addToolState
Parameters
element: HTMLDivElement
data: ToolData
Returns void
getToolState
Parameters
element: HTMLDivElement
Returns CINETypes.ToolData | undefined
playClip
Starts playing a clip or adjusts the frame rate of an already playing clip. framesPerSecond is optional and defaults to 30 if not specified. A negative framesPerSecond will play the clip in reverse. The element must be a stack of images
Parameters
element: HTMLDivElement
HTML Element
playClipOptions: PlayClipOptions
Returns void
stopClip
Stops an already playing clip.
Parameters
element: HTMLDivElement
HTML Element
options: any = ...
Returns void
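A minimal sketch of driving the CINE utilities on a stack viewport element, assuming they are re-exported as utilities.cine from @cornerstonejs/tools and the element already displays an image stack:

import { utilities } from '@cornerstonejs/tools';

const element = document.getElementById('viewport-div') as HTMLDivElement;

// Play at 24 frames per second; a negative value plays the clip in reverse.
utilities.cine.playClip(element, { framesPerSecond: 24 });

// ...later, stop the loop again.
utilities.cine.stopClip(element);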
contourSegmentation
addContourSegmentationAnnotation
Adds a contour segmentation annotation to the specified segmentation.
Parameters
annotation: ContourSegmentationAnnotation
The contour segmentation annotation to add.
Returns void
areSameSegment
Check if two contour segmentations are from the same segmentId, segmentationRepresentationUID, and segmentIndex.
Parameters
firstAnnotation: ContourSegmentationAnnotation
First annotation
secondAnnotation: ContourSegmentationAnnotation
Second annotation
Returns boolean
True if they are from the same segmentId, segmentationRepresentationUID, and segmentIndex; false otherwise.
isContourSegmentationAnnotation
Parameters
annotation: Annotation
Returns annotation is ContourSegmentationAnnotation
removeContourSegmentationAnnotation
Removes a contour segmentation annotation from the given annotation. If the annotation does not have segmentation data, this method returns quietly. This can occur for interpolated segmentations that have not yet been converted to real segmentations, or for other in-process segmentations.
Parameters
annotation: ContourSegmentationAnnotation
The contour segmentation annotation to remove.
Returns void
contours
interpolation
InterpolationManager
constructor
Returns default
static toolNames
static acceptAutoGenerated
Accepts the autogenerated interpolations, marking them as non-autogenerated. Can provide a selector to choose which ones to accept.
Rules for which items to select:
- Only choose annotations having the same segment index and segmentationID
- Exclude all contours having the same interpolation UID as any other contours on the same slice.
- Exclude autogenerated annotations
- Exclude any reset interpolationUIDs (this is a manual operation to allow creating a new interpolation)
- Find the set of interpolationUIDs remaining:
  a. If the set is of size 0, assign a new interpolationUID
  b. If the set is of size 1, assign that interpolationUID
  c. Otherwise (optionally; else apply b to the size > 1 set), for every remaining annotation find the one whose center point is closest to the center point of the new annotation, and choose that interpolationUID
To allow creating new interpolated groups, the idea is to just use a new segment index, then have an operation to update the segment index of an interpolation set. That way the user can easily draw/see the difference, and then merge them as required. However, the base rules allow creating two contours on a single image to create a separate set.
Parameters
annotationGroupSelector: AnnotationGroupSelector
selector: AcceptInterpolationSelector = {}
Returns void
static addTool
Parameters
toolName: string
Returns void
static handleAnnotationCompleted
When an annotation is completed, if the configuration includes interpolation, then find matching interpolations and interpolate between this segmentation and the other segmentations of the same type.
Parameters
evt: AnnotationCompletedEventType
Returns void
static handleAnnotationDelete
Delete interpolated annotations when their endpoints are deleted.
Parameters
evt: AnnotationRemovedEventType
Returns void
static handleAnnotationUpdate
This method gets called when an annotation changes. It will then trigger related already interpolated annotations to be updated with the modified data.
Parameters
evt: AnnotationModifiedEventType
Returns void
AnnotationToPointData
constructor
Returns AnnotationToPointData
static TOOL_NAMES
static convert
Parameters
annotation: any
index: any
metadataProvider: any
Returns { ContourSequence: any; ROIDisplayColor: number[]; ReferencedROINumber: any }
ContourSequence: any
ROIDisplayColor: number[]
ReferencedROINumber: any
static register
Parameters
toolClass: any
Returns void
contourFinder
Type declaration
findContours: (lines: any) => any
Parameters
lines: any
Returns any
findContoursFromReducedSet: (lines: any) => any
Parameters
lines: any
Returns any
detectContourHoles
Type declaration
processContourHoles: (contours: any, points: any, useXOR?: boolean) => any
Check if contours have holes and, if so, update the contours accordingly
Parameters
contours: any
points: any
useXOR: boolean = true
Returns any
acceptAutogeneratedInterpolations
Accepts interpolated annotations, marking them as autoGenerated false.
Parameters
annotationGroupSelector: AnnotationGroupSelector
viewport or FOR to select annotations on
selector: AcceptInterpolationSelector
nested selection criteria
Returns void
areCoplanarContours
Check if two contour segmentation annotations are coplanar.
A plane may be represented by a normal and a distance. To know if two contours are coplanar we need to:
- check if the normals of the two annotations are pointing to the same direction or to opposite directions (dot product equal to 1 or -1 respectively)
- Get one point from each polyline and project it onto the normal to get the distance from the origin (0, 0, 0).
Parameters
firstAnnotation: ContourAnnotation
secondAnnotation: ContourAnnotation
Returns boolean
calculatePerimeter
Calculates the perimeter of a polyline.
Parameters
polyline: number[][]
The polyline represented as an array of points.
closed: boolean
Indicates whether the polyline is closed or not.
Returns number
The perimeter of the polyline.
findHandlePolylineIndex
Finds the index in the polyline of the specified handle. If the handle doesn’t match a polyline point, then finds the closest polyline point.
Assumes polyline is in the same orientation as the handles.
Parameters
annotation: ContourAnnotation
to find the polyline and handles in
handleIndex: number
the index of the handle to look for. Negative values are treated relative to the end of the handle index.
Returns number
Index in the polyline of the closest handle: 0 for handleIndex 0, the polyline length for handleIndex === handles length
generateContourSetsFromLabelmap
Parameters
__namedParameters: Object
Returns any[]
getContourHolesDataCanvas
Get the polylines for the child annotations (holes)
Parameters
annotation: Annotation
Annotation
viewport: IViewport
Viewport used to convert the points from world to canvas space
Returns Types.Point2[][]
An array that contains all child polylines
getContourHolesDataWorld
Get child polylines data in world space for contour annotations that represent the holes
Parameters
annotation: Annotation
Annotation
Returns Types.Point3[][]
An array that contains all child polylines (holes) in world space
getDeduplicatedVTKPolyDataPoints
Iterate through polyData from vtk.js and merge any points that are the same, then update the merged point references within the lines array
Parameters
polyData: any
vtkPolyData
bypass: boolean = false
bypass the duplicate point removal
Returns { lines: { a: any; b: any }[]; points: any[] }
the updated polyData
lines: { a: any; b: any }[]
points: any[]
updateContourPolyline
Update the contour polyline data
Parameters
annotation: ContourAnnotation
Contour annotation
polylineData: { closed?: boolean; points: Point2[]; targetWindingDirection?: ContourWindingDirection }
Polyline data (points, winding direction and closed)
transforms: { canvasToWorld: (point: Point2) => Point3 }
Methods to convert points to/from canvas and world spaces
optional options: { decimate?: { enabled?: boolean; epsilon?: number } }
Options
- decimate: parameters used to decimate the polyline, reducing the number of points stored. This also affects how fast the annotation is drawn in a viewport, how fast the winding direction is computed, and how fast contours are appended/removed and holes created. A higher epsilon value results in a polyline with fewer points.
Returns void
drawing
getTextBoxCoordsCanvas
Determine the coordinates that will place the textbox to the right of the annotation.
Parameters
annotationCanvasPoints: Point2[]
The canvas points of the annotation’s handles.
Returns Types.Point2
- The coordinates for default placement of the textbox.
dynamicVolume
generateImageFromTimeData
Gets the scalar data for a series of time frames from a 4D volume and returns an array of scalar data after performing an AVERAGE, SUM, or SUBTRACT operation, to be used to create a 3D volume
Parameters
dynamicVolume: IDynamicImageVolume
operation: string
operation to perform on time frame data, operations include SUM, AVERAGE, and SUBTRACT (can only be used with 2 time frames provided)
optional frameNumbers: number[]
an array of frame indices to perform the operation on, if left empty, all frames will be used
Returns Float32Array
getDataInTime
Gets the scalar data for a series of time points for either a single coordinate or a segmentation mask. It returns an array of scalar data for a single coordinate, or an array of arrays for a segmentation.
Parameters
dynamicVolume: IDynamicImageVolume
4D volume to compute time point data from
options: { frameNumbers?: any; imageCoordinate?: any; maskVolumeId?: any }
frameNumbers: which frames to use as time points; if left blank, data is taken over all frames. maskVolumeId: segmentationId to get time point data for. imageCoordinate: world coordinate to get time point data for.
Returns number[] | number[][]
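A hedged sketch of sampling time data from a 4D volume, assuming the helpers are re-exported as utilities.dynamicVolume and that 'my4DVolumeId' refers to a dynamic volume already loaded into the cache (both names are placeholders):

import { cache } from '@cornerstonejs/core';
import type { Types } from '@cornerstonejs/core';
import { utilities } from '@cornerstonejs/tools';

const dynamicVolume = cache.getVolume('my4DVolumeId') as Types.IDynamicImageVolume;

// Intensity over time at a single world coordinate (one value per time point).
const curve = utilities.dynamicVolume.getDataInTime(dynamicVolume, {
  imageCoordinate: [12.5, -30.2, 85.0],
});

// Collapse all time frames into a single 3D frame by averaging.
const averaged = utilities.dynamicVolume.generateImageFromTimeData(dynamicVolume, 'AVERAGE');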
math
BasicStatsCalculator
BasicStatsCalculator
constructor
Returns default
static run
Type declaration
Parameters
__namedParameters: Object
Returns void
static getStatistics
Basic function that calculates statistics for a given array of points.
Returns NamedStatistics
An object that contains: max: the maximum value of the array; mean: the mean of the array; stdDev: the standard deviation of the array; stdDevWithSumSquare: the standard deviation of the array using the sum of squares; array: an array of the above values, in order.
static statsCallback
This callback is used when we verify whether a point is inside the annotation drawn, so we can collect every point in the shape to calculate the statistics
Parameters
value: Object
of the point in the shape of the annotation
Returns void
abstract Calculator
constructor
Returns Calculator
static getStatistics
Type declaration
Gets the statistics as both an array of values, as well as the named values.
Returns NamedStatistics
static run
Type declaration
Parameters
__namedParameters: Object
Returns void
aabb
distanceToPoint
Calculates the distance of a point to an AABB using a 2D box SDF (signed distance field)
The SDF of a Box https://www.youtube.com/watch?v=62-pRVZuS5c
Parameters
aabb: AABB2
Axis-aligned bound box (minX, minY, maxX and maxY)
point: Point2
2D point
Returns number
The distance between the 2D point and the AABB
distanceToPointSquared
Calculates the squared distance of a point to an AABB using a 2D box SDF (signed distance field)
The SDF of a Box https://www.youtube.com/watch?v=62-pRVZuS5c
Parameters
aabb: AABB2
Axis-aligned bound box
point: Point2
2D point
Returns number
The squared distance between the 2D point and the AABB
intersectAABB
Check if two axis-aligned bounding boxes intersect
Parameters
aabb1: AABB2
First AABB
aabb2: AABB2
Second AABB
Returns boolean
True if they intersect or false otherwise
ellipse
getCanvasEllipseCorners
It takes the canvas coordinates of the ellipse corners and returns the top left and bottom right corners of it
Parameters
ellipseCanvasPoints: CanvasCoordinates
The coordinates of the ellipse in the canvas.
Returns Types.Point2[]
An array of two points.
pointInEllipse
Given an ellipse and a point, return true if the point is inside the ellipse
Parameters
ellipse: any
The ellipse object to check against.
pointLPS: any
The point in LPS space to test.
inverts: Inverts = {}
An object to cache the inverted radius squared values, if you are testing multiple points against the same ellipse then it is recommended to pass in the same object to cache the values. However, there is a simpler way to do this by passing in the fast flag as true, then on the first iteration the values will be cached and on subsequent iterations the cached values will be used.
Returns boolean
A boolean value.
precalculatePointInEllipse
This will perform some precalculations to make things faster. Ideally, use the precalculated function returned inside inverts to call the test function. This minimizes re-reading of variables and only needs the LPS point passed each time. That is:
const inverts = precalculatePointInEllipse(ellipse);
if (inverts.precalculated(pointLPS)) { ... }
Parameters
ellipse: any
inverts: Inverts = {}
Returns Inverts
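A minimal sketch of testing many points against the same ellipse, assuming the helpers are re-exported as utilities.math.ellipse and that the ellipse object uses center/xRadius/yRadius/zRadius fields (an assumption; the parameter is typed as any above):

import { utilities } from '@cornerstonejs/tools';
import type { Types } from '@cornerstonejs/core';

const ellipse = { center: [0, 0, 0], xRadius: 10, yRadius: 5, zRadius: 1 };

// Precompute the inverted squared radii once, then reuse the cached test function.
const inverts = utilities.math.ellipse.precalculatePointInEllipse(ellipse);

const pointsLPS: Types.Point3[] = [
  [1, 1, 0],
  [9, 4, 0],
  [20, 0, 0],
];

const inside = pointsLPS.filter((p) => inverts.precalculated(p));
// [20, 0, 0] falls outside the 10 x 5 x 1 ellipsoid and is filtered out.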
lineSegment
distanceToPoint
Calculates the distance of a point to a line
Parameters
lineStart: Point2
x,y coordinates of the start of the line
lineEnd: Point2
x,y coordinates of the end of the line
point: Point2
x,y of the point
Returns number
distance
distanceToPointSquared
Calculates the distance-squared of a point to a line segment
Parameters
lineStart: Point2
x,y coordinates of the start of the line
lineEnd: Point2
x,y coordinates of the end of the line
point: Point2
x,y of the point
Returns number
distance-squared
distanceToPointSquaredInfo
Calculate the closest point and the squared distance between a reference point and a line segment.
It projects the reference point onto the line segment but it shall be bounded by the start/end points since this is a line segment and not a line which could be extended.
Parameters
lineStart: Point2
Start point of the line segment
lineEnd: Point2
End point of the line segment
point: Point2
Reference point
Returns { distanceSquared: number; point: Types.Point2 }
The closest point and the squared distance between the point and the line segment defined by lineStart and lineEnd points.
distanceSquared: number
point: Types.Point2
intersectLine
Calculates the intersection point between two lines in the 2D plane
Parameters
line1Start: Point2
x,y coordinates of the start of the first line
line1End: Point2
x,y coordinates of the end of the first line
line2Start: Point2
x,y coordinates of the start of the second line
line2End: Point2
x,y coordinates of the end of the second line
Returns number[]
[x, y] coordinates of the intersection point
isPointOnLineSegment
Test if a point is on a line segment
Parameters
lineStart: Point2
Line segment start point
lineEnd: Point2
Line segment end point
point: Point2
Point to test
Returns boolean
True if the point lies on the line segment or false otherwise
point
distanceToPoint
Calculates the distance of a point to another point
Parameters
p1: Point
x,y or x,y,z of the point
p2: Point
x,y or x,y,z of the point
Returns number
distance
distanceToPointSquared
Calculates the distance squared of a point to another point
Parameters
p1: Point
x,y or x,y,z of the point
p2: Point
x,y or x,y,z of the point
Returns number
distance
mirror
Get a mirrored point along the line created by two points, where one of them is the static ("anchor") point and the other is the point to be mirrored.
Parameters
mirrorPoint: Point2
2D point to be mirrored
staticPoint: Point2
Static 2D point
Returns Types.Point2
Mirrored 2D point
polyline
addCanvasPointsToArray
Adds one or more points to the array at a resolution defined by the underlying image.
Parameters
element: HTMLDivElement
canvasPoints: Point2[]
newCanvasPoint: Point2
commonData: PlanarFreehandROICommonData
Returns number
containsPoint
Checks if a 2D point is inside the polyline.
A point is inside a curve/polygon if the number of intersections between the horizontal ray emanating from the given point and to the right and the line segments is odd. https://www.eecs.umich.edu/courses/eecs380/HANDOUTS/PROJ2/InsidePoly.html
Note that a point on the polyline is considered inside.
Parameters
polyline: Point2[]
Polyline points (2D)
point: Point2
2D Point
options: { closed?: boolean; holes?: Point2[][] } = ...
Returns boolean
True if the point is inside the polyline or false otherwise
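A minimal sketch of the point-in-polygon test with an optional hole, assuming the polyline helpers are re-exported as utilities.math.polyline:

import { utilities } from '@cornerstonejs/tools';
import type { Types } from '@cornerstonejs/core';

const square: Types.Point2[] = [
  [0, 0],
  [10, 0],
  [10, 10],
  [0, 10],
];
const hole: Types.Point2[] = [
  [4, 4],
  [6, 4],
  [6, 6],
  [4, 6],
];

utilities.math.polyline.containsPoint(square, [2, 2]); // true
utilities.math.polyline.containsPoint(square, [5, 5], { holes: [hole] }); // false, inside the hole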
containsPoints
Checks if a polyline contains a set of points.
Parameters
polyline: Point2[]
Polyline points (2D)
points: Point2[]
2D points to verify
Returns boolean
True if all points are inside the polyline or false otherwise
decimate
Ramer–Douglas–Peucker algorithm implementation to decimate a polyline to a similar polyline with fewer points
https://en.wikipedia.org/wiki/Ramer%E2%80%93Douglas%E2%80%93Peucker_algorithm https://rosettacode.org/wiki/Ramer-Douglas-Peucker_line_simplification https://karthaus.nl/rdp/
Parameters
polyline: Point2[]
Polyline to decimate
epsilon: number = DEFAULT_EPSILON
A maximum distance 'epsilon' used to decide whether a point should be added to the decimated polyline. In each iteration the polyline is split into two polylines, and the distance of each point to the line that connects the first and last points is checked against epsilon.
Returns Point2[]
Decimated polyline
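A minimal sketch of decimating a dense polyline with the Ramer–Douglas–Peucker helper; the epsilon value is arbitrary and the namespace path (utilities.math.polyline) is assumed:

import { utilities } from '@cornerstonejs/tools';
import type { Types } from '@cornerstonejs/core';

// A dense sine-wave polyline with 200 points.
const dense = Array.from({ length: 200 }, (_, i) => [i, Math.sin(i / 10) * 20] as Types.Point2);

// A larger epsilon removes more points (and more detail).
const simplified = utilities.math.polyline.decimate(dense, 0.5);

console.log(`${dense.length} -> ${simplified.length} points`);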
getAABB
Calculates the axis-aligned bounding box (AABB) of a polyline.
Parameters
polyline: number[] | Point2[] | Point3[]
The polyline represented as an array of points.
optional options: { numDimensions: number }
Additional options for calculating the AABB.
Returns Types.AABB2 | Types.AABB3
The AABB of the polyline. If the polyline represents points in 3D space, returns an AABB3 object with properties minX, maxX, minY, maxY, minZ, and maxZ. If the polyline represents points in 2D space, returns an AABB2 object with properties minX, maxX, minY, and maxY.
getArea
Calculates the area of an array of Point2 points using the shoelace algorithm. The area is in the same units as the points: e.g. if the points are in canvas space, the result is in canvas pixels^2; if they are in mm, the result is in mm^2.
Parameters
points: Point2[]
Returns number
getClosestLineSegmentIntersection
Checks whether the line (p1, q1) intersects any of the other lines in points, and returns the closest one.
Parameters
points: Point2[]
Polyline points
p1: Point2
Start point of the line segment
q1: Point2
End point of the line segment
closed: boolean = true
Test the intersection against the line that connects the first to the last when closed
Returns { distance: number; segment: Types.Point2 } | undefined
The closest line segment from polyline that intersects the line segment [p1, q1]
getFirstLineSegmentIntersectionIndexes
Checks whether the line (p1, q1) intersects any of the other lines in points, and returns the first one found.
Parameters
points: Point2[]
Polyline points
p1: Point2
First point of the line segment that is being tested
q1: Point2
Second point of the line segment that is being tested
closed: boolean = true
Test the intersection with the line segment that connects the last and first points of the polyline
Returns Types.Point2 | undefined
Indexes of the line segment points from the polyline that intersects [p1, q1]
getLineSegmentIntersectionsCoordinates
Returns all intersections points between a line segment and a polyline
Parameters
points: Point2[]
p1: Point2
q1: Point2
closed: boolean = true
Returns Types.Point2[]
getLineSegmentIntersectionsIndexes
Get all intersections between a polyline and a line segment.
Parameters
polyline: Point2[]
Polyline points
p1: Point2
Start point of line segment
q1: Point2
End point of line segment
closed: boolean = true
Test the intersection against the line segment that connects the last to the first point when set to true
Returns Types.Point2[]
Start/end point indexes of all line segments that intersect (p1, q1)
getNormal2
Calculate the normal of a 2D polyline https://www.youtube.com/watch?v=GpsKrAipXm8&t=1982s
Parameters
polyline: Point2[]
Planar polyline in 2D space
Returns Types.Point3
Normal of the 2D planar polyline
getNormal3
Calculate the normal of a 3D planar polyline
Parameters
polyline: Point3[]
Planar polyline in 3D space
Returns Types.Point3
Normal of the 3D planar polyline
getSignedArea
Returns the signed area of a 2D polyline https://www.youtube.com/watch?v=GpsKrAipXm8&t=1900s
This function has a runtime very close to getArea and should be called only if you need the sign of the area (e.g. to calculate a polygon normal). If you do not need the sign, you should always call getArea.
Parameters
polyline: Point2[]
Polyline points (2D)
Returns number
Signed area of the polyline
getSubPixelSpacingAndXYDirections
Gets the desired spacing for points in the polyline for the PlanarFreehandROITool in the x and y canvas directions, and also returns these canvas directions in world space.
Parameters
viewport: default | default
The Cornerstone3D StackViewport or VolumeViewport.
subPixelResolution: number
The number to divide the image pixel spacing by to get the sub-pixel spacing. E.g. 10 will return spacings 10x smaller than the native image spacing.
Returns { spacing: Point2; xDir: Point3; yDir: Point3 }
The spacings of the X and Y directions, and the 3D directions of the x and y directions.
spacing: Point2
xDir: Point3
yDir: Point3
getWindingDirection
Calculate the winding direction (CW or CCW) of a polyline
Parameters
polyline: Point2[]
Polyline (2D)
Returns number
1 for CW or -1 for CCW polylines
intersectPolyline
Check if two polylines intersect comparing line segment by line segment.
Parameters
sourcePolyline: Point2[]
Source polyline
targetPolyline: Point2[]
Target polyline
Returns boolean
True if the polylines intersect or false otherwise
isClosed
A polyline is considered closed if the start and end points are at the same position
Parameters
polyline: Point2[]
Polyline points (2D)
Returns boolean
True if the polyline is already closed or false otherwise
isPointInsidePolyline3D
Determines whether a 3D point is inside a polyline in 3D space.
The algorithm works by reducing the polyline and point to 2D space, and then using the 2D algorithm to determine whether the point is inside the polyline.
Parameters
point: Point3
The 3D point to test.
polyline: Point3[]
The polyline represented as an array of 3D points.
options: { holes?: Point3[][] } = {}
Returns boolean
A boolean indicating whether the point is inside the polyline.
mergePolylines
Merge two planar polylines (2D)
Parameters
targetPolyline: Point2[]
sourcePolyline: Point2[]
Returns Point2[]
pointCanProjectOnLine
Returns true if the point p can project onto the line segment (p1, p2), and if this projected point is less than proximity units away.
Parameters
p: Point2
p1: Point2
p2: Point2
proximity: number
Returns boolean
pointsAreWithinCloseContourProximity
Returns true if points p1 and p2 are within closeContourProximity.
Parameters
p1: Point2
p2: Point2
closeContourProximity: number
Returns boolean
projectTo2D
Projects a polyline from 3D to 2D by reducing one dimension.
Parameters
polyline: Point3[]
The polyline to be projected.
Returns { projectedPolyline: Point2[]; sharedDimensionIndex: any }
An object containing the shared dimension index and the projected polyline in 2D.
projectedPolyline: Point2[]
sharedDimensionIndex: any
subtractPolylines
Subtract two planar polylines (2D)
Parameters
targetPolyline: Point2[]
sourcePolyline: Point2[]
Returns Types.Point2[][]
rectangle
distanceToPoint
Calculates distance of the point to the rectangle. It calculates the minimum distance between the point and each line segment of the rectangle.
Parameters
rect: number[]
coordinates of the rectangle [left, top, width, height]
point: Point2
[x,y] coordinates of a point
Returns number
vec2
findClosestPoint
Find the closest point to the target point
Parameters
sourcePoints: Point2[]
The potential source points.
targetPoint: Point2
The target point, used to find the closest source.
Returns Types.Point2
The closest point in the array of point sources
liangBarksyClip
Parameters
a: any
b: any
box: any
[xmin, ymin, xmax, ymax]
optionalda: any
optionaldb: any
Returns 1 | 0
orientation
public getOrientationStringLPS
Returns the orientation of the vector in the patient coordinate system.
Parameters
vector: Point3
Input array
Returns string
The orientation in the patient coordinate system.
public invertOrientationStringLPS
Inverts an orientation string.
Parameters
orientationString: string
The orientation.
Returns string
The inverted orientationString.
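A minimal sketch of mapping a direction vector to a patient-orientation label, assuming the helpers are re-exported as utilities.orientation; the exact letters returned ('H'/'F' here) are an assumption:

import { utilities } from '@cornerstonejs/tools';

// In LPS, +z points toward the patient's head (superior).
const label = utilities.orientation.getOrientationStringLPS([0, 0, 1]); // expected 'H'
const opposite = utilities.orientation.invertOrientationStringLPS(label); // expected 'F'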
planar
filterAnnotationsForDisplay
Given the viewport and the annotations, it filters the annotations array and only returns those annotations that should be displayed on the viewport
Parameters
viewport: IViewport
annotations: Annotations
Annotations
filterOptions: ReferenceCompatibleOptions = {}
Returns Annotations
A filtered version of the annotations.
filterAnnotationsWithinSlice
Given some Annotations, and the slice defined by the camera's normal direction and the spacing in the normal, filter the Annotations that are within the slice.
Parameters
annotations: Annotations
Annotations
camera: ICamera
The camera
spacingInNormalDirection: number
The spacing in the normal direction
Returns Annotations
The filtered Annotations.
getPointInLineOfSightWithCriteria
Returns a point based on some criteria (e.g., minimum or maximum intensity) in the line of sight (on the line between the passed worldPosition and the camera position). It iterates over the points on that line with a given step size.
Parameters
viewport: default
Volume viewport
worldPos: Point3
World coordinates of the clicked location
targetVolumeId: string
target Volume ID in the viewport
criteriaFunction: (intensity: number, point: Point3) => Point3
A function that returns the point if it passes a certain written logic; for instance, it can be a maxValue function that keeps a record of all intensity values and only returns the point if its intensity is greater than the maximum intensity of the points passed before.
stepSize: number = 0.25
Returns Types.Point3
the World pos of the point that passes the criteriaFunction
getWorldWidthAndHeightFromCorners
Given two world positions and an orthogonal view to an imageVolume defined by a viewPlaneNormal and a viewUp, get the width and height in world coordinates of the rectangle defined by the two points. The implementation works with both orthogonal and non-orthogonal rectangles.
Parameters
viewPlaneNormal: Point3
The normal of the view.
viewUp: Point3
The up direction of the view.
topLeftWorld: Point3
The first world position.
bottomRightWorld: Point3
The second world position.
Returns { worldHeight: number; worldWidth: number }
The worldWidth and worldHeight.
worldHeight: number
worldWidth: number
isPlaneIntersectingAABB
Checks if a plane intersects with an Axis-Aligned Bounding Box (AABB).
Parameters
origin: any
The origin point of the plane.
normal: any
The normal vector of the plane.
minX: any
The minimum x-coordinate of the AABB.
minY: any
The minimum y-coordinate of the AABB.
minZ: any
The minimum z-coordinate of the AABB.
maxX: any
The maximum x-coordinate of the AABB.
maxY: any
The maximum y-coordinate of the AABB.
maxZ: any
The maximum z-coordinate of the AABB.
Returns boolean
A boolean indicating whether the plane intersects with the AABB.
planarFreehandROITool
smoothAnnotation
Interpolates (smooths) a given annotation from a given enabledElement. It mutates the annotation parameter. The knotsRatioPercentage parameter defines the percentage of points to be considered as knots in the interpolation process. Interpolation is skipped if the annotation is not present in the enabledElement (or there is no toolGroup associated with it), or if the related tool is being modified.
Parameters
enabledElement: IEnabledElement
annotation: PlanarFreehandROIAnnotation
knotsRatioPercentage: number
Returns boolean
polyDataUtils
getPoint
Gets a point from an array of numbers given its index
Parameters
points: any
array of numbers; each point is defined by three consecutive numbers
idx: any
index of the point to retrieve
Returns Types.Point3
getPolyDataPointIndexes
Extract contour point sets from the outline of a poly data actor
Parameters
polyData: vtkPolyData
vtk polyData
Returns any[]
getPolyDataPoints
Extract contour points from a poly data object
Parameters
polyData: vtkPolyData
vtk polyData
Returns any[]
rectangleROITool
getBoundsIJKFromRectangleAnnotations
Parameters
annotations: any
referenceVolume: any
options: Options = ...
Returns any
isAxisAlignedRectangle
Determines whether a given rectangle in a 3D space (defined by its corner points in IJK coordinates) is aligned with the IJK axes.
Parameters
rectangleCornersIJK: any
The corner points of the rectangle in IJK coordinates
Returns boolean
True if the rectangle is aligned with the IJK axes, false otherwise
segmentation
contourAndFindLargestBidirectional
Generates a contour object over the segment, and then uses the contouring to find the largest bidirectional object that can be applied within the acquisition plane that is within the segment index, or the contained segment indices.
Parameters
segmentation: any
Returns any
createBidirectionalToolData
Creates data suitable for the BidirectionalTool from the basic bidirectional data object.
Parameters
bidirectionalData: BidirectionalData
viewport: any
Returns Annotation
createImageIdReferenceMap
Creates a map that associates each imageId with a set of segmentation imageIds. Note that this function assumes that the imageIds and segmentationImageIds arrays are the same length and same order.
Parameters
imageIdsArray: string[]
An array of imageIds.
segmentationImageIds: string[]
An array of segmentation imageIds.
Returns Map<string, string>
A map that maps each imageId to a set of segmentation imageIds.
createLabelmapVolumeForViewport
Create a new 3D segmentation volume from the default imageData presented in the first actor of the viewport. It looks at the metadata of the imageData to determine the volume dimensions and spacing if particular options are not provided.
Parameters
input: { options?: { dimensions: Point3; direction: Mat3; metadata: Metadata; origin: Point3; scalarData: Float32Array | Int16Array | Uint16Array | Uint8Array; spacing: Point3; targetBuffer: { type: Float32Array | Uint16Array | Uint8Array | Int8Array }; volumeId: string }; renderingEngineId: string; segmentationId?: string; viewportId: string }
Returns Promise<string>
A promise that resolves to the Id of the new labelmap volume.
createMergedLabelmapForIndex
Given a list of labelmaps (with the possibility of overlapping regions), and a segmentIndex it creates a new labelmap with the same dimensions as the input labelmaps, but merges them into a single labelmap for the segmentIndex. It wipes out all other segment Indices. This is useful for calculating statistics regarding a specific segment when there are overlapping regions between labelmap (e.g. TMTV)
Parameters
labelmaps: IImageVolume[]
Array of labelmaps
segmentIndex: number = 1
The segment index to merge
volumeId: string = 'mergedLabelmap'
Returns Types.IImageVolume
Merged labelmap
floodFill
floodFill.js - taken from the MIT OSS library https://github.com/tuzz/n-dimensional-flood-fill and refactored to ES6. Fixed the bounds/visits checks to use integer keys, restricting the total search space to +/- 32k in each dimension, but resulting in roughly a hundredfold performance gain for larger regions, since JavaScript does not provide a hash map that can key directly on coordinate tuples.
Parameters
getter: FloodFillGetter
The getter for the elements of your data structure, e.g. getter(x, y) for a 2D interpretation of your structure.
seed: Point2 | Point3
The seed for your fill. The dimensionality is inferred from the number of dimensions of the seed.
options: FloodFillOptions = {}
Returns FloodFillResult
Flood fill results
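A hedged sketch of a 2D flood fill over a small binary grid; the result field (flooded) follows the upstream n-dimensional-flood-fill library and is an assumption here, as is the utilities.segmentation path:

import { utilities } from '@cornerstonejs/tools';

const grid = [
  [1, 1, 0],
  [1, 0, 0],
  [0, 0, 1],
];

// The getter returns undefined outside the grid, which acts as a boundary.
const getter = (x: number, y: number) => grid[y]?.[x];

// Flood the region connected to the seed that shares the seed's value (the top-left 1s).
const result = utilities.segmentation.floodFill(getter, [0, 0]);

console.log(result.flooded); // [x, y] coordinates connected to the seed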
getBrushSizeForToolGroup
Gets the brush size for the first brush-based tool instance in a given tool group.
Parameters
toolGroupId: string
The ID of the tool group to get the brush size for.
optional toolName: string
The name of the specific tool to get the brush size for (Optional) If not provided, the first brush-based tool instance in the tool group will be used.
Returns void
The brush size of the selected tool instance, or undefined if no brush-based tool instance is found.
getBrushThresholdForToolGroup
Parameters
toolGroupId: string
Returns any
getBrushToolInstances
Parameters
toolGroupId: string
optional toolName: string
Returns any[]
getDefaultRepresentationConfig
It returns a configuration object for the given representation type.
Parameters
segmentation: Segmentation
Returns LabelmapConfig
A representation configuration object.
getHoveredContourSegmentationAnnotation
Retrieves the index of the hovered contour segmentation annotation for a given segmentation ID.
Parameters
segmentationId: any
The ID of the segmentation.
Returns number
The index of the hovered contour segmentation annotation, or undefined if none is found.
getSegmentAtLabelmapBorder
Retrieves the segment index at the border of a labelmap in a segmentation.
Parameters
segmentationId: string
The ID of the segmentation.
worldPoint: Point3
The world coordinates of the point.
options: Options
Additional options.
Returns number
The segment index at the labelmap border, or undefined if not found.
getSegmentAtWorldPoint
Get the segment at the specified world point in the viewport.
Parameters
segmentationId: string
The ID of the segmentation to get the segment for.
worldPoint: Point3
The world point to get the segment for.
options: Options = ...
Returns number
The index of the segment at the world point, or undefined if not found.
getUniqueSegmentIndices
Retrieves the unique segment indices from a given segmentation.
Parameters
segmentationId: any
The ID of the segmentation.
Returns any
An array of unique segment indices.
invalidateBrushCursor
Invalidates the brush cursor for a specific tool group. This function triggers the update of the brush being rendered. It also triggers an annotation render for any viewports on the tool group.
Parameters
toolGroupId: string
The ID of the tool group.
Returns void
isValidRepresentationConfig
Given a representation type and a configuration, return true if the configuration is valid for that representation type
Parameters
representationType: string
The type of segmentation representation
config: RepresentationConfig
RepresentationConfig
Returns boolean
A boolean value.
rectangleROIThresholdVolumeByRange
It uses the provided rectangleROI annotations (either RectangleROIThreshold, or RectangleROIStartEndThreshold) to compute an ROI that is the intersection of all the annotations. Then it uses the rectangleROIThreshold utility to threshold the volume.
Parameters
annotationUIDs: string[]
rectangleROI annotationsUIDs to use for ROI
segmentationVolume: IImageVolume
the segmentation volume
thresholdVolumeInformation: ThresholdInformation[]
object array containing the volume data and range threshold values
options: ThresholdOptions
options for thresholding
Returns Types.IImageVolume
segmentContourAction
Parameters
element: HTMLDivElement
configuration: any
Returns any
setBrushSizeForToolGroup
Sets the brush size for all brush-based tools in a given tool group.
Parameters
toolGroupId: string
The ID of the tool group to set the brush size for.
brushSize: number
The new brush size to set.
optional toolName: string
The name of the specific tool to set the brush size for (optional) If not provided, all brush-based tools in the tool group will be affected.
Returns void
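A minimal sketch of reading and updating the brush size for a tool group; the tool group id is a placeholder and the brush size is in the units the brush tools use:

import { utilities } from '@cornerstonejs/tools';

const toolGroupId = 'CT_TOOLGROUP';

// Radius of the first brush-based tool instance in the group.
const currentSize = utilities.segmentation.getBrushSizeForToolGroup(toolGroupId);

// Apply a new radius to every brush-based tool in the group.
utilities.segmentation.setBrushSizeForToolGroup(toolGroupId, 15);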
setBrushThresholdForToolGroup
Parameters
toolGroupId: string
threshold: Point2
otherArgs: Record<string, unknown> = ...
Returns void
thresholdSegmentationByRange
It thresholds a segmentation volume based on a set of threshold values with respect to a list of volumes and respective threshold ranges.
Parameters
segmentationVolume: IImageVolume
the segmentation volume to be modified
segmentationIndex: number
the index of the segmentation to modify
thresholdVolumeInformation: ThresholdInformation[]
array of objects containing volume data and a range (lower and upper values) to threshold
overlapType: number
indicates if the user requires all voxels pass (overlapType = 1) or any voxel pass (overlapType = 0)
Returns Types.IImageVolume
thresholdVolumeByRange
It thresholds a segmentation volume based on a set of threshold values with respect to a list of volumes and respective threshold ranges.
Parameters
segmentationVolume: IImageVolume
the segmentation volume to be modified
thresholdVolumeInformation: ThresholdInformation[]
array of objects containing volume data and a range (lower and upper values) to threshold
options: ThresholdRangeOptions
The options for thresholding. As the volumes may have different dimensions and spacing, there may be no 1-to-1 voxel mapping, so we work with voxel overlaps (1-to-many mappings). We consider all intersections valid, to avoid the complexity of calculating a minimum voxel intersection percentage. Given a voxel center and spacing, this function calculates the overlap of the voxel with another volume and range-checks the voxels in the overlap. Three situations can occur: all voxels pass the range check, some voxels pass, or no voxels pass. The overlapType parameter indicates whether the user requires all voxels to pass (overlapType = 1) or any voxel to pass (overlapType = 0).
Returns Types.IImageVolume
segmented volume
triggerSegmentationRender
It triggers a render for all the segmentations of the tool group with the given Id.
Parameters
toolGroupId: string
The Id of the tool group to render.
Returns void
touch
copyPoints
Parameters
points: ITouchPoints
Returns ITouchPoints
copyPointsList
Copies a set of points.
Parameters
points: ITouchPoints[]
The IPoints instance to copy.
Returns ITouchPoints[]
A copy of the points.
getDeltaDistance
getDeltaDistanceBetweenIPoints
getDeltaPoints
getDeltaRotation
Parameters
currentPoints: ITouchPoints[]
lastPoints: ITouchPoints[]
Returns void
getMeanPoints
getMeanTouchPoints
Parameters
points: ITouchPoints[]
Returns ITouchPoints
viewport
jumpToSlice
isViewportPreScaled
Parameters
viewport: default | default
targetId: string
Returns boolean
jumpToWorld
Uses the viewport’s current camera to jump to a specific world coordinate
Parameters
viewport: default
jumpWorld: Point3
location in the world to jump to
Returns true | undefined
True if successful
viewportFilters
filterViewportsWithFrameOfReferenceUID
Given an array of viewports, returns a list of viewports that are viewing a world space with the given FrameOfReferenceUID.
Parameters
viewports: IViewport[]
An array of viewports.
FrameOfReferenceUID: string
The UID defining a particular world space/Frame Of Reference.
Returns (Types.IStackViewport | Types.IVolumeViewport)[]
A filtered array of viewports.
filterViewportsWithParallelNormals
It filters the viewports that are looking at the same view as the camera. It basically checks whether the viewport's viewPlaneNormal is parallel to the camera's viewPlaneNormal.
Parameters
viewports: any
Array of viewports to filter
camera: any
Camera to compare against
EPS: number = 0.999
Returns any
- Array of viewports with the same view
filterViewportsWithToolEnabled
Given an array of viewports, returns a list of viewports that have the specified tool enabled.
Parameters
viewports: IViewport[]
An array of viewports.
toolName: string
The name of the tool to filter on.
Returns (Types.IStackViewport | Types.IVolumeViewport)[]
A filtered array of viewports.
getViewportIdsWithToolToRender
Given a cornerstone3D enabled element and a toolName, find all viewportIds looking at the same Frame Of Reference that have the tool with the given toolName active, passive, or enabled.
Parameters
element: HTMLDivElement
The target cornerstone3D enabled element.
toolName: string
The string toolName.
requireParallelNormals: boolean = true
If true, only return viewports that have parallel normals.
Returns string[]
An array of viewportIds.
voi
colorbar
Enums
ColorbarRangeTextPosition
Specifies the position of the text/ticks. Left/Right are the valid options for vertical colorbars and Top/Bottom for horizontal ones.
Bottom
Left
Right
Top
Types
ColorbarCommonProps
Type declaration
optional imageRange?: ColorbarImageRange
optional showFullPixelValueRange?: boolean
optional ticks?: { position?: ColorbarRangeTextPosition; style?: ColorbarTicksStyle }
optional position?: ColorbarRangeTextPosition
optional style?: ColorbarTicksStyle
optional voiRange?: ColorbarVOIRange
ColorbarImageRange
Type declaration
lower: number
upper: number
ColorbarProps
ColorbarSize
Type declaration
height: number
width: number
ColorbarTicksProps
ColorbarTicksStyle
Type declaration
optional color?: string
optional font?: string
optional labelMargin?: number
optional maxNumTicks?: number
optional tickSize?: number
optional tickWidth?: number
ColorbarVOIRange
ViewportColorbarProps
Colorbar
A base colorbar class that is not associated with any viewport. It supports click-and-drag to change the VOI range, shows the ticks during interaction, and can show either the full image range or the VOI range.
constructor
Parameters
props: ColorbarProps
Returns Colorbar
public activeColormapName
Returns the active LUT name
Returns string
Set the current active LUT name and re-renders the color bar
Parameters
colormapName: string
Returns void
public id
Widget id
Returns string
public imageRange
Returns ColorbarImageRange
Parameters
imageRange: ColorbarImageRange
Returns void
public rootElement
Widget’s root element
Returns HTMLElement
public showFullImageRange
Returns boolean
Parameters
value: boolean
Returns void
public voiRange
Returns ColorbarImageRange
Parameters
voiRange: ColorbarImageRange
Returns void
public _createTicksBar
Parameters
props: ColorbarProps
Returns ColorbarTicks
public appendTo
Append the widget to a parent element
Parameters
container: HTMLElement
HTML element where the widget should be added to
Returns void
public destroy
Returns void
ViewportColorbar
A colorbar associated with a viewport that updates automatically when the viewport VOI changes or when the stack/volume is updated.
constructor
Parameters
props: ViewportColorbarProps
Returns ViewportColorbar
public activeColormapName
Returns the active LUT name
Returns string
Set the current active LUT name and re-renders the color bar
Parameters
colormapName: string
Returns void
public element
Returns HTMLDivElement
public enabledElement
Returns IEnabledElement
public id
Widget id
Returns string
public imageRange
Returns ColorbarImageRange
Parameters
imageRange: ColorbarImageRange
Returns void
public rootElement
Widget’s root element
Returns HTMLElement
public showFullImageRange
Returns boolean
Parameters
value: boolean
Returns void
public voiRange
Returns ColorbarImageRange
Parameters
voiRange: ColorbarImageRange
Returns void
public _createTicksBar
Parameters
props: ColorbarProps
Returns ColorbarTicks
public appendTo
Append the widget to a parent element
Parameters
container: HTMLElement
HTML element where the widget should be added to
Returns void
public destroy
Returns void
windowLevel
calculateMinMaxMean
Parameters
pixelLuminance: any
globalMin: any
globalMax: any
Returns { max: any; mean: number; min: any }
max: any
mean: number
min: any
extractWindowLevelRegionToolData
Parameters
viewport: any
Returns { color: any; columns: any; height: any; maxPixelValue: number; minPixelValue: number; rows: any; scalarData: any; width: any }
color: any
columns: any
height: any
maxPixelValue: number
minPixelValue: number
rows: any
scalarData: any
width: any
getLuminanceFromRegion
Extracts the luminance values from a specified region of an image.
Parameters
imageData: any
The image data object containing pixel information.
x: any
The x-coordinate of the top-left corner of the region.
y: any
The y-coordinate of the top-left corner of the region.
width: any
The width of the region.
height: any
The height of the region.
Returns any[]
An array containing the luminance values of the specified region.
Classes
annotationFrameRange
This class handles the annotation frame range values for multiframes. Mostly used for the Video viewport, it allows references to a range of frame values.
constructor
Returns default
public static framesToString
Parameters
range: any
Returns string
public static getFrameRange
Parameters
annotation: Annotation
Returns number | [number, number]
public static setFrameRange
Sets the range of frames to associate with the given annotation. The range can be a single frame number (1-based according to DICOM), or a range of values in the format min-max where min and max are inclusive. Modifies the referencedImageID to specify the updated URL.
Parameters
annotation: Annotation
range: string | FramesRange
optional eventBase: { renderingEngineId: any; viewportId: any }
Returns void
Variables
const stackContextPrefetch
Type declaration
disable: (element: any) => void
Parameters
element: any
Returns void
enable: (element: any) => void
Call this to enable stack context sensitive prefetch. It should be called before stack data is set in order to start prefetching after the first image loads. This adds a STACK_NEW_IMAGE listener to detect when a new image is displayed, and then updates the prefetch stack. The context sensitive prefetch reacts to the initial display or significant moves, the already loaded images, the cache size, and the direction of navigation. The behaviour is:
- On navigating to a new image initially, or one that is at a different position:
- Fetch the next/previous 2 images
- If the user is navigating forward/backward by less than 5 images, then
- Prefetch additional images in the direction of navigation, up to 100
- If all the images in a given prefetch have completed, then:
- Use the last prefetched image size as an image size for the stack
- Fetch up to 1/4 of the cache size images near the current image
This is designed to:
- Get nearby images immediately so that they are available for navigation
- Under the assumption that users might click and view an image, then navigate to next/previous image to see the exact image they want
- Not interfere with loading other viewports if they are still loading
- Load priority is prefetch, and minimal images are requested initially
- Load an entire series if it will fit in memory
- Allows navigating to other parts of the series and display images immediately
- Have images available for CINE/navigation in one direction even when
there is more image data than will fit in memory.
- Up to 100 images in the direction of travel will be prefetched
Parameters
element: any
to prefetch on
Returns void
getConfiguration: () => { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }
Returns { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }
directionExtraImages: number
maxAfter: number
maxImagesToPrefetch: number
minBefore: number
preserveExistingPool: boolean
setConfiguration: (config: any) => void
Parameters
config: any
Returns void
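A minimal sketch of wiring up context-sensitive prefetch for a stack viewport element; enable is called before the stack is set, and disable on teardown:

import { utilities } from '@cornerstonejs/tools';

const element = document.getElementById('viewport-div') as HTMLDivElement;

// Enable before calling viewport.setStack(...) so prefetch starts with the first displayed image.
utilities.stackContextPrefetch.enable(element);

// ...when the viewport is destroyed:
utilities.stackContextPrefetch.disable(element);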
const stackPrefetch
Type declaration
disable: (element: any) => void
Parameters
element: any
Returns void
enable: (element: any) => void
Parameters
element: any
Returns void
getConfiguration: () => { maxImagesToPrefetch: number; preserveExistingPool: boolean }
Returns { maxImagesToPrefetch: number; preserveExistingPool: boolean }
maxImagesToPrefetch: number
preserveExistingPool: boolean
setConfiguration: (config: any) => void
Parameters
config: any
Returns void
Functions
calibrateImageSpacing
It adds the provided spacing to the Cornerstone internal calibratedPixelSpacing metadata provider, then invalidates all the tools that have the imageId as their reference imageId. Finally, it triggers a re-render for the invalidated annotations.
Parameters
imageId: string
ImageId for the calibrated image
renderingEngine: default
calibrationOrScale: number | IImageCalibration
either the calibration object or a scale value
Returns void
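A minimal sketch of applying a scale calibration to an image; the rendering engine id and imageId are placeholders:

import { getRenderingEngine } from '@cornerstonejs/core';
import { utilities } from '@cornerstonejs/tools';

const renderingEngine = getRenderingEngine('myRenderingEngine');
const imageId = 'wadouri:https://example.com/image.dcm';

// A plain number is treated as a scale value; annotations referencing the imageId are invalidated and re-rendered.
utilities.calibrateImageSpacing(imageId, renderingEngine, 0.5);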
public clip
Clips a value to an upper and lower bound.
Parameters
val: number
The value to clip.
low: number
The lower bound.
high: number
The upper bound.
Returns number
The clipped value.
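A minimal sketch, assuming clip is re-exported at the top level of utilities:

import { utilities } from '@cornerstonejs/tools';

utilities.clip(12, 0, 10); // 10
utilities.clip(-3, 0, 10); // 0
utilities.clip(7, 0, 10); // 7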
debounce
Creates a debounced function that delays invoking func until after wait milliseconds have elapsed since the last time the debounced function was invoked, or until the next browser frame is drawn. The debounced function comes with a cancel method to cancel delayed func invocations and a flush method to immediately invoke them. Provide options to indicate whether func should be invoked on the leading and/or trailing edge of the wait timeout. The func is invoked with the last arguments provided to the debounced function. Subsequent calls to the debounced function return the result of the last func invocation.
Note: If leading and trailing options are true, func is invoked on the trailing edge of the timeout only if the debounced function is invoked more than once during the wait timeout.
If wait is 0 and leading is false, func invocation is deferred until the next tick, similar to setTimeout with a timeout of 0.
If wait is omitted in an environment with requestAnimationFrame, func invocation will be deferred until the next frame is drawn (typically about 16ms).
See David Corbacho's article for details over the differences between debounce and throttle.
Parameters
func: Function
The function to debounce.
optional wait: number
The number of milliseconds to delay; if omitted, requestAnimationFrame is used (if available).
optional options: { leading: boolean; maxWait: number; trailing: boolean }
The options object.
Returns Function
Returns the new debounced function.
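A minimal sketch of debouncing a resize handler; the 200 ms wait and option values are arbitrary:

import { utilities } from '@cornerstonejs/tools';

const onResize = utilities.debounce(
  () => {
    // Re-render annotations once the layout has settled.
    console.log('resize settled');
  },
  200,
  { leading: false, trailing: true, maxWait: 1000 }
);

window.addEventListener('resize', onResize as EventListener);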
getAnnotationNearPoint
Get the annotation that is close to the provided canvas point; it will return the first annotation found.
Parameters
element: HTMLDivElement
The element to search for an annotation on.
canvasPoint: Point2
The canvasPoint on the page where the user clicked.
proximity: number = 5
The distance from the canvasPoint to the annotation.
Returns Annotation | null
The annotation for the element
getAnnotationNearPointOnEnabledElement
Find the annotation near the point on the enabled element; it will return the first annotation found.
Parameters
enabledElement: IEnabledElement
The element that is currently active.
point: Point2
The point to search near.
proximity: number
The distance from the point that the annotation must be within.
Returns Annotation | null
A Annotation object.
getCalibratedAspect
Gets the aspect ratio of the screen display relative to the image display in order to square up measurement values. For example, suppose the spacing on the image is 1, 0.5 (x, y spacing) and it is displayed at 1, 1 spacing on screen; the aspect value will then be 1 / 0.5 = 2.
Parameters
image: any
Returns any
getCalibratedLengthUnitsAndScale
Extracts the calibrated length units, area units, and the scale for converting from internal spacing to image spacing.
Parameters
image: any
to extract the calibration from
handles: any
to detect if spacing information is different between points
Returns { areaUnits: string; scale: number; units: string }
Object containing the units, area units, and scale
areaUnits: string
scale: number
units: string
getCalibratedProbeUnitsAndValue
Parameters
image: any
handles: any
Returns { calibrationType: undefined; units: string[]; values: any[] } | { calibrationType: string; units: string[]; values: any[] }
getSphereBoundsInfo
Given an imageData, and the great circle top and bottom points of a sphere, this function will run the callback for each point of the imageData that is within the sphere defined by the great circle points. If the viewport is provided, region of interest will be an accurate approximation of the sphere (using viewport camera), and the resulting performance will be better.
Parameters
circlePoints: [Point3, Point3]
bottom and top points of the great circle in world coordinates
imageData: vtkImageData
The volume imageData
viewport: any
Returns { bottomRightWorld: Types.Point3; boundsIJK: BoundsIJK; centerWorld: Types.Point3; radiusWorld: number; topLeftWorld: Types.Point3 }
bottomRightWorld: Types.Point3
boundsIJK: BoundsIJK
centerWorld: Types.Point3
radiusWorld: number
topLeftWorld: Types.Point3
getViewportForAnnotation
Finds a matching viewport in terms of the orientation of the annotation data and the frame of reference. This doesn’t mean the annotation IS being displayed in the viewport, just that it could be by navigating the slice, and/or pan/zoom, without changing the orientation.
Parameters
annotation: Annotation
to find a viewport that it could display in
Returns default | default
The viewport to display in
isObject
Checks if value is the language type of Object (e.g. arrays, functions, objects, regexes, new Number(0), and new String('')).
Parameters
value: any
The value to check.
Returns boolean
Returns true if value is an object, else false.
jumpToSlice
It uses the imageIndex in the Options to scroll to the slice that is intended. It works for both Stack and Volume viewports. In VolumeViewports, the imageIndex should be given with respect to the index in the 3D image in the view direction (i.e. the index of the slice in Axial, Sagittal, Coronal, or Oblique).
Parameters
element: HTMLDivElement
the HTML Div element scrolling inside
options: JumpToSliceOptions = ...
the options used for jumping to a slice
Returns Promise<void>
Promise that resolves to ImageIdIndex
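A minimal sketch of jumping a viewport element to a specific slice; the element id is a placeholder and the returned promise is ignored here:

import { utilities } from '@cornerstonejs/tools';

const element = document.getElementById('viewport-div') as HTMLDivElement;

// For volume viewports the index is interpreted along the current view direction.
void utilities.jumpToSlice(element, { imageIndex: 42 });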
pointInShapeCallback
For each point in the image (If boundsIJK is not provided, otherwise, for each point in the provided bounding box), It runs the provided callback IF the point passes the provided criteria to be inside the shape (which is defined by the provided pointInShapeFn)
Parameters
imageData: vtkImageData | CPUImageData
The image data object.
pointInShapeFn: ShapeFnCriteria
A function that takes a point in LPS space and returns true if the point is in the shape and false if it is not.
optional callback: PointInShapeCallback
A function that will be called for every point in the shape.
optional boundsIJK: BoundsIJK
The bounds of the volume in IJK coordinates.
Returns PointInShape[]
pointInSurroundingSphereCallback
Given an imageData, and the great circle top and bottom points of a sphere, this function will run the callback for each point of the imageData that is within the sphere defined by the great circle points. If the viewport is provided, region of interest will be an accurate approximation of the sphere (using viewport camera), and the resulting performance will be better.
Parameters
imageData: vtkImageData
The volume imageData
circlePoints: [Point3, Point3]
bottom and top points of the great circle in world coordinates
callback: PointInShapeCallback
A callback function that will be called for each point in the shape.
optional viewport: default
Returns void
pointToString
Parameters
point: any
decimals: number = 5
Returns string
roundNumber
Parameters
value: string | number | (string | number)[]
optional precision: number
Returns string
scroll
It scrolls one slice in the Stack or Volume Viewport, it uses the options provided to determine the slice to scroll to. For Stack Viewport, it scrolls in the 1 or -1 direction, for Volume Viewport, it uses the camera and focal point to determine the slice to scroll to based on the spacings.
Parameters
viewport: IViewport
The viewport in which to scroll
options: ScrollOptions
Options to use for scrolling, including direction, invert, and volumeId
Returns void
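A minimal sketch of scrolling a viewport by one slice; the option names used here (delta, volumeId) are assumptions, and the rendering engine and viewport ids are placeholders:

import { getRenderingEngine } from '@cornerstonejs/core';
import { utilities } from '@cornerstonejs/tools';

const renderingEngine = getRenderingEngine('myRenderingEngine');
const viewport = renderingEngine.getViewport('CT_AXIAL');

// Positive delta scrolls forward, negative backwards; volumeId is needed for volume viewports.
utilities.scroll(viewport, { delta: 1, volumeId: 'myVolumeId' });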
throttle
Creates a throttled function that only invokes func at most once per every wait milliseconds (or once per browser frame). The throttled function comes with a cancel method to cancel delayed func invocations and a flush method to immediately invoke them. Provide options to indicate whether func should be invoked on the leading and/or trailing edge of the wait timeout. The func is invoked with the last arguments provided to the throttled function. Subsequent calls to the throttled function return the result of the last func invocation.
Note: If leading and trailing options are true, func is invoked on the trailing edge of the timeout only if the throttled function is invoked more than once during the wait timeout.
If wait is 0 and leading is false, func invocation is deferred until the next tick, similar to setTimeout with a timeout of 0.
If wait is omitted in an environment with requestAnimationFrame, func invocation will be deferred until the next frame is drawn (typically about 16ms).
See David Corbacho's article for details over the differences between throttle and debounce.
Parameters
func: Function
The function to throttle.
optional wait: number
The number of milliseconds to throttle invocations to; if omitted, requestAnimationFrame is used (if available).
optional options: { leading: boolean; trailing: boolean }
The options object.
Returns Function
Returns the new throttled function.
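A minimal sketch of throttling a pointer-move handler; with wait omitted, invocations are deferred to requestAnimationFrame where available:

import { utilities } from '@cornerstonejs/tools';

const onMouseMove = utilities.throttle((evt: MouseEvent) => {
  console.log('cursor at', evt.clientX, evt.clientY);
});

document.addEventListener('mousemove', onMouseMove as EventListener);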
triggerAnnotationRender
It triggers the rendering of the annotations for the given HTML element using the AnnotationRenderingEngine
Parameters
element: HTMLDivElement
The element to render the annotation on.
Returns void
triggerAnnotationRenderForToolGroupIds
Triggers annotation rendering for the specified tool group IDs.
Parameters
toolGroupIds: string[]
An array of tool group IDs.
Returns void
triggerAnnotationRenderForViewportIds
Parameters
renderingEngine: default
viewportIdsToRender: string[]
Returns void
triggerEvent
Parameters
el: EventTarget
type: string
optional detail: unknown
Returns boolean