utilities

Index

Namespaces

boundingBox

boundingBox:

getBoundingBoxAroundShape

Renames and re-exports getBoundingBoxAroundShapeIJK

extend2DBoundingBoxInViewAxis

  • extend2DBoundingBoxInViewAxis(boundsIJK: [Point2, Point2, Point2], numSlicesToProject: number): [Types.Point2, Types.Point2, Types.Point2]
  • Uses the current bounds of the 2D rectangle and extends them in the view axis by numSlicesToProject. It compares the min and max of each IJK component to find the view axis (for axial, zMin === zMax) and then calculates the extended range. The slices are assumed to be relative to the current slice, and the given number of slices is added to the current max of the bounding box.


    Parameters

    • boundsIJK: [Point2, Point2, Point2]

      [[iMin, iMax], [jMin, jMax], [kMin, kMax]]

    • numSlicesToProject: number

    Returns [Types.Point2, Types.Point2, Types.Point2]

    extended bounds
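
    A minimal usage sketch (assuming the namespace is exposed as utilities.boundingBox on @cornerstonejs/tools, as this page suggests; the bounds values are illustrative):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      // Axial bounds where kMin === kMax marks the view axis (a single slice).
      const boundsIJK: [Types.Point2, Types.Point2, Types.Point2] = [
        [10, 40], // [iMin, iMax]
        [12, 50], // [jMin, jMax]
        [5, 5],   // [kMin, kMax]
      ];

      // Extend the bounds by 3 slices along the view (k) axis.
      const extended = utilities.boundingBox.extend2DBoundingBoxInViewAxis(
        boundsIJK,
        3
      );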

getBoundingBoxAroundShapeIJK

  • getBoundingBoxAroundShapeIJK(points: Point2[] | Point3[], dimensions?: Point2 | Point3): BoundingBox
  • Given vertex (point) coordinates in 2D or 3D IJK space, it calculates the minimum and maximum coordinate in each axis and returns them. If dimensions are provided, it also clips the min and max to the provided width, height and depth.


    Parameters

    • points: Point2[] | Point3[]

      shape corner point coordinates in IJK (image) space

    • optional dimensions: Point2 | Point3

      bounds to clip the min, max

    Returns BoundingBox

    [[xMin,xMax],[yMin,yMax], [zMin,zMax]]
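
    For example, a hedged sketch of computing and clipping an IJK bounding box (import path assumed; the point and dimension values are illustrative):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      // Corner points of a shape in IJK (image) coordinates.
      const points: Types.Point3[] = [
        [10, 12, 5],
        [40, 30, 5],
        [25, 50, 7],
      ];

      // Volume dimensions used to clip the result so the box stays inside the image.
      const dimensions: Types.Point3 = [128, 128, 64];

      // Expected shape: [[iMin, iMax], [jMin, jMax], [kMin, kMax]], here [[10, 40], [12, 50], [5, 7]].
      const boundsIJK = utilities.boundingBox.getBoundingBoxAroundShapeIJK(
        points,
        dimensions
      );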

getBoundingBoxAroundShapeWorld

  • getBoundingBoxAroundShapeWorld(points: Point2[] | Point3[], clipBounds?: Point2 | Point3): BoundingBox
  • Given vertex (point) coordinates in 2D or 3D world coordinates, it calculates the minimum and maximum coordinate in each axis and returns them. If clipBounds are provided, it also clips the min and max to the provided width, height and depth.


    Parameters

    • points: Point2[] | Point3[]

      shape corner point coordinates in world space

    • optional clipBounds: Point2 | Point3

      bounds to clip the min, max

    Returns BoundingBox

    [[xMin,xMax],[yMin,yMax], [zMin,zMax]]

cine

cine:

Events

Events:

CINE Tool Events

CLIP_STARTED

CLIP_STARTED: CORNERSTONE_CINE_TOOL_STARTED

CLIP_STOPPED

CLIP_STOPPED: CORNERSTONE_CINE_TOOL_STOPPED

addToolState

  • addToolState(element: HTMLDivElement, data: ToolData): void
  • Parameters

    • element: HTMLDivElement
    • data: ToolData

    Returns void

getToolState

  • getToolState(element: HTMLDivElement): CINETypes.ToolData | undefined
  • Parameters

    • element: HTMLDivElement

    Returns CINETypes.ToolData | undefined

playClip

  • playClip(element: HTMLDivElement, playClipOptions: PlayClipOptions): void
  • Starts playing a clip or adjusts the frame rate of an already playing clip. framesPerSecond is optional and defaults to 30 if not specified. A negative framesPerSecond will play the clip in reverse. The element must be displaying a stack of images.


    Parameters

    • element: HTMLDivElement

      HTML Element

    • playClipOptions: PlayClipOptions

    Returns void

stopClip

  • stopClip(element: HTMLDivElement, options?: any): void
  • Stops an already playing clip.


    Parameters

    • element: HTMLDivElement

      HTML Element

    • options: any = ...

    Returns void
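
    A short sketch of playing and stopping a CINE clip on a stack viewport element (the element id and frame rate are placeholders; the utilities.cine path is assumed from this page):

      import { utilities } from '@cornerstonejs/tools';

      const element = document.getElementById('stack-viewport') as HTMLDivElement;

      // Play the stack at 24 frames per second; a negative value plays in reverse.
      utilities.cine.playClip(element, { framesPerSecond: 24 });

      // ...later, stop the clip.
      utilities.cine.stopClip(element);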

contourSegmentation

contourSegmentation:

addContourSegmentationAnnotation

  • Adds a contour segmentation annotation to the specified segmentation.


    Parameters

    Returns void

areSameSegment

  • Check if two contour segmentations are from the same segmentId, segmentationRepresentationUID and segmentIndex.


    Parameters

    Returns boolean

    True if they are from the same segmentId, segmentationRepresentationUID and segmentIndex, or false otherwise.

isContourSegmentationAnnotation

removeContourSegmentationAnnotation

  • Removes a contour segmentation annotation from its segmentation. If the annotation does not have segmentation data, this method returns quietly. This can occur for interpolated segmentations that have not yet been converted to real segmentations, or other in-process segmentations.


    Parameters

    Returns void

contours

contours:

interpolation

interpolation:

InterpolationManager

InterpolationManager:

constructor

  • new InterpolationManager(): default
  • Returns default

static toolNames

toolNames: any[] = []

static acceptAutoGenerated

  • acceptAutoGenerated(annotationGroupSelector: AnnotationGroupSelector, selector?: AcceptInterpolationSelector): void
  • Accepts the autogenerated interpolations, marking them as non-autogenerated. Can provide a selector to choose which ones to accept.

    Rules for which items to select:

    1. Only choose annotations having the same segment index and segmentationID
    2. Exclude all contours having the same interpolation UID as any other contours on the same slice.
    3. Exclude autogenerated annotations
    4. Exclude any reset interpolationUIDs (this is a manual operation to allow creating a new interpolation)
    5. Find the set of interpolationUIDs remaining:
       a. If the set is of size 0, assign a new interpolationUID.
       b. If the set is of size 1, assign that interpolationUID.
       c. Otherwise (optional; otherwise do b for size > 1 randomly), for every remaining annotation, find the one whose center point is closest to the center point of the new annotation, and choose that interpolationUID.

    To allow creating new interpolated groups, the idea is to just use a new segment index, then have an operation to update the segment index of an interpolation set. That way the user can easily draw/see the difference, and then merge them as required. However, the base rules allow creating two contours on a single image to create a separate set.


    Parameters

    Returns void

static addTool

  • addTool(toolName: string): void
  • Parameters

    • toolName: string

    Returns void

static handleAnnotationCompleted

  • handleAnnotationCompleted(evt: AnnotationCompletedEventType): void
  • When an annotation is completed, if the configuration includes interpolation, then find matching interpolations and interpolate between this segmentation and the other segmentations of the same type.


    Parameters

    • evt: AnnotationCompletedEventType

    Returns void

static handleAnnotationDelete

  • handleAnnotationDelete(evt: AnnotationRemovedEventType): void
  • Delete interpolated annotations when their endpoints are deleted.


    Parameters

    • evt: AnnotationRemovedEventType

    Returns void

static handleAnnotationUpdate

  • handleAnnotationUpdate(evt: AnnotationModifiedEventType): void
  • This method gets called when an annotation changes. It will then trigger related already interpolated annotations to be updated with the modified data.


    Parameters

    • evt: AnnotationModifiedEventType

    Returns void

AnnotationToPointData

AnnotationToPointData:

constructor

  • new AnnotationToPointData(): AnnotationToPointData
  • Returns AnnotationToPointData

static TOOL_NAMES

TOOL_NAMES: Record<string, any> = {}

static convert

  • convert(annotation: any, index: any, metadataProvider: any): { ContourSequence: any; ROIDisplayColor: number[]; ReferencedROINumber: any }
  • Parameters

    • annotation: any
    • index: any
    • metadataProvider: any

    Returns { ContourSequence: any; ROIDisplayColor: number[]; ReferencedROINumber: any }

    • ContourSequence: any
    • ROIDisplayColor: number[]
    • ReferencedROINumber: any

static register

  • register(toolClass: any): void
  • Parameters

    • toolClass: any

    Returns void

contourFinder

contourFinder: { findContours: (lines: any) => any; findContoursFromReducedSet: (lines: any) => any }

Type declaration

  • findContours: (lines: any) => any
      • (lines: any): any
      • Parameters

        • lines: any

        Returns any

  • findContoursFromReducedSet: (lines: any) => any
      • (lines: any): any
      • Parameters

        • lines: any

        Returns any

detectContourHoles

detectContourHoles: { processContourHoles: (contours: any, points: any, useXOR?: boolean) => any }

Type declaration

  • processContourHoles: (contours: any, points: any, useXOR?: boolean) => any
      • (contours: any, points: any, useXOR?: boolean): any
      • Check if contours have holes; if so, update the contours accordingly


        Parameters

        • contours: any
        • points: any
        • useXOR: boolean = true

        Returns any

acceptAutogeneratedInterpolations

  • acceptAutogeneratedInterpolations(annotationGroupSelector: AnnotationGroupSelector, selector: AcceptInterpolationSelector): void
  • Accepts interpolated annotations, marking them as not auto-generated (autoGenerated = false).


    Parameters

    • annotationGroupSelector: AnnotationGroupSelector

      viewport or FOR to select annotations on

    • selector: AcceptInterpolationSelector

      nested selection criteria

    Returns void

areCoplanarContours

  • Check if two contour segmentation annotations are coplanar.

    A plane may be represented by a normal and a distance. To know if two contours are coplanar we need to:

    • check if the normals of the two annotations point in the same direction or in opposite directions (dot product equal to 1 or -1, respectively)
    • get one point from each polyline and project it onto the normal to get the distance from the origin (0, 0, 0)

    Parameters

    Returns boolean

calculatePerimeter

  • calculatePerimeter(polyline: number[][], closed: boolean): number
  • Calculates the perimeter of a polyline.


    Parameters

    • polyline: number[][]

      The polyline represented as an array of points.

    • closed: boolean

      Indicates whether the polyline is closed or not.

    Returns number

    The perimeter of the polyline.
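
    As a quick check of the units and the closed flag, a minimal sketch (assuming calculatePerimeter is reachable from the contours utilities namespace, as the hierarchy on this page suggests):

      import { utilities } from '@cornerstonejs/tools';

      // A 3-4-5 right triangle; closed = true adds the segment from the last
      // point back to the first, so the perimeter is 3 + 4 + 5 = 12.
      const triangle = [
        [0, 0],
        [3, 0],
        [3, 4],
      ];

      const perimeter = utilities.contours.calculatePerimeter(triangle, true);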

findHandlePolylineIndex

  • Finds the index in the polyline of the specified handle. If the handle doesn’t match a polyline point, then finds the closest polyline point.

    Assumes polyline is in the same orientation as the handles.


    Parameters

    • annotation: ContourAnnotation

      to find the polyline and handles in

    • handleIndex: number

      the index of the handle to look for. Negative values are treated relative to the end of the handle index.

    Returns number

    Index in the polyline of the closest handle:

    • 0 for handleIndex 0
    • polyline length for handleIndex === handles length

generateContourSetsFromLabelmap

  • generateContourSetsFromLabelmap(__namedParameters: Object): any[]
  • Parameters

    • __namedParameters: Object

    Returns any[]

getContourHolesDataCanvas

  • getContourHolesDataCanvas(annotation: Annotation, viewport: IViewport): Types.Point2[][]
  • Get the polylines for the child annotations (holes)


    Parameters

    • annotation: Annotation

      Annotation

    • viewport: IViewport

      Viewport used to convert the points from world to canvas space

    Returns Types.Point2[][]

    An array that contains all child polylines

getContourHolesDataWorld

  • getContourHolesDataWorld(annotation: Annotation): Types.Point3[][]
  • Get child polylines data in world space for contour annotations that represent the holes


    Parameters

    Returns Types.Point3[][]

    An array that contains all child polylines (holes) in world space

getDeduplicatedVTKPolyDataPoints

  • getDeduplicatedVTKPolyDataPoints(polyData: any, bypass?: boolean): { lines: { a: any; b: any }[]; points: any[] }
  • Iterates through polyData from vtk.js, merges any points that are the same, and then updates the merged point references within the lines array


    Parameters

    • polyData: any

      vtkPolyData

    • bypass: boolean = false

      bypass the duplicate point removal

    Returns { lines: { a: any; b: any }[]; points: any[] }

    the updated polyData

    • lines: { a: any; b: any }[]
    • points: any[]

updateContourPolyline

  • updateContourPolyline(annotation: ContourAnnotation, polylineData: { closed?: boolean; points: Point2[]; targetWindingDirection?: ContourWindingDirection }, transforms: { canvasToWorld: (point: Point2) => Point3 }, options?: { decimate?: { enabled?: boolean; epsilon?: number } }): void
  • Update the contour polyline data


    Parameters

    • annotation: ContourAnnotation

      Contour annotation

    • polylineData: { closed?: boolean; points: Point2[]; targetWindingDirection?: ContourWindingDirection }

      Polyline data (points, winding direction and closed)

    • transforms: { canvasToWorld: (point: Point2) => Point3 }

      Methods to convert points to/from canvas and world spaces

    • optional options: { decimate?: { enabled?: boolean; epsilon?: number } }

      Options

      • decimate: parameters to decimate the polyline, reducing the number of points stored, which also affects how fast the annotation is drawn in a viewport, the winding direction is computed, contours are appended/removed and holes are created. A higher epsilon value results in a polyline with fewer points.

    Returns void

drawing

drawing:

getTextBoxCoordsCanvas

  • getTextBoxCoordsCanvas(annotationCanvasPoints: Point2[]): Types.Point2
  • Determine the coordinates that will place the textbox to the right of the annotation.


    Parameters

    • annotationCanvasPoints: Point2[]

      The canvas points of the annotation’s handles.

    Returns Types.Point2

    • The coordinates for default placement of the textbox.

dynamicVolume

dynamicVolume:

generateImageFromTimeData

  • generateImageFromTimeData(dynamicVolume: IDynamicImageVolume, operation: string, frameNumbers?: number[]): Float32Array
  • Gets the scalar data for a series of time frames from a 4D volume and returns an array of scalar data after performing AVERAGE, SUM or SUBTRACT, to be used to create a 3D volume


    Parameters

    • dynamicVolume: IDynamicImageVolume
    • operation: string

      operation to perform on the time frame data; operations include SUM, AVERAGE, and SUBTRACT (SUBTRACT can only be used when exactly 2 time frames are provided)

    • optional frameNumbers: number[]

      an array of frame indices to perform the operation on, if left empty, all frames will be used

    Returns Float32Array

getDataInTime

  • getDataInTime(dynamicVolume: IDynamicImageVolume, options: { frameNumbers?: any; imageCoordinate?: any; maskVolumeId?: any }): number[] | number[][]
  • Gets the scalar data for a series of time points for either a single coordinate or a segmentation mask. It returns an array of scalar data for a single coordinate, or an array of arrays for a segmentation.


    Parameters

    • dynamicVolume: IDynamicImageVolume

      4D volume to compute time point data from

    • options: { frameNumbers?: any; imageCoordinate?: any; maskVolumeId?: any }

      • frameNumbers: which frames to use as time points; if left blank, gets data over all time points
      • maskVolumeId: segmentationId to get time point data of
      • imageCoordinate: world coordinate to get time point data of

    Returns number[] | number[][]
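
    A hedged sketch of sampling time-intensity data from a loaded 4D volume (the volume and segmentation IDs are placeholders; cache and Types come from @cornerstonejs/core):

      import { cache } from '@cornerstonejs/core';
      import type { Types } from '@cornerstonejs/core';
      import { utilities } from '@cornerstonejs/tools';

      const dynamicVolume = cache.getVolume(
        'my4DVolumeId'
      ) as Types.IDynamicImageVolume;

      // Time-intensity curve at a single world coordinate.
      const curve = utilities.dynamicVolume.getDataInTime(dynamicVolume, {
        imageCoordinate: [12.5, -30.2, 88.0],
      });

      // Per-voxel curves for every voxel inside a segmentation mask,
      // restricted to the first ten time frames.
      const curves = utilities.dynamicVolume.getDataInTime(dynamicVolume, {
        maskVolumeId: 'mySegmentationVolumeId',
        frameNumbers: Array.from({ length: 10 }, (_, i) => i),
      });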

math

math:

BasicStatsCalculator

BasicStatsCalculator:

BasicStatsCalculator

BasicStatsCalculator:

constructor

  • new BasicStatsCalculator(): default
  • Returns default

static run

run: (__namedParameters: Object) => void

Type declaration

    • (__namedParameters: Object): void
    • Parameters

      • __namedParameters: Object

      Returns void

static getStatistics

  • Basic function that calculates statistics for a given array of points.


    Returns NamedStatistics

    An object that contains:

    • max: the maximum value of the array
    • mean: the mean of the array
    • stdDev: the standard deviation of the array
    • stdDevWithSumSquare: the standard deviation of the array using sum²
    • array: an array of the above values, in order

static statsCallback

  • statsCallback(value: Object): void
  • This callback is used when we verify whether a point is inside the drawn annotation, so that every point in the shape can be collected to calculate the statistics


    Parameters

    • value: Object

      of the point in the shape of the annotation

    Returns void

abstract Calculator

Calculator:

constructor

  • new Calculator(): Calculator
  • Returns Calculator

static getStatistics

getStatistics: () => NamedStatistics

Type declaration

static run

run: (__namedParameters: Object) => void

Type declaration

    • (__namedParameters: Object): void
    • Parameters

      • __namedParameters: Object

      Returns void

aabb

aabb:

distanceToPoint

  • distanceToPoint(aabb: AABB2, point: Point2): number
  • Calculates the distance of a point to an AABB using 2D Box SDF (Signed Distance Field)

    The SDF of a Box https://www.youtube.com/watch?v=62-pRVZuS5c


    Parameters

    • aabb: AABB2

      Axis-aligned bounding box (minX, minY, maxX and maxY)

    • point: Point2

      2D point

    Returns number

    The closest distance between the 2D point and the AABB

distanceToPointSquared

  • distanceToPointSquared(aabb: AABB2, point: Point2): number
  • Calculates the squared distance of a point to an AABB using 2D Box SDF (Signed Distance Field)

    The SDF of a Box https://www.youtube.com/watch?v=62-pRVZuS5c


    Parameters

    • aabb: AABB2

      Axis-aligned bounding box

    • point: Point2

      2D point

    Returns number

    The closest squared distance between the 2D point and the AABB

intersectAABB

  • intersectAABB(aabb1: AABB2, aabb2: AABB2): boolean
  • Check if two axis-aligned bounding boxes intersect


    Parameters

    • aabb1: AABB2

      First AABB

    • aabb2: AABB2

      Second AABB

    Returns boolean

    True if they intersect or false otherwise

ellipse

ellipse:

getCanvasEllipseCorners

  • It takes the canvas coordinates of the ellipse corners and returns the top left and bottom right corners of it


    Parameters

    Returns Types.Point2[]

    An array of two points.

pointInEllipse

  • pointInEllipse(ellipse: any, pointLPS: any, inverts?: Inverts): boolean
  • Given an ellipse and a point, return true if the point is inside the ellipse


    Parameters

    • ellipse: any

      The ellipse object to check against.

    • pointLPS: any

      The point in LPS space to test.

    • inverts: Inverts = {}

      An object to cache the inverted radius-squared values. If you are testing multiple points against the same ellipse, it is recommended to pass in the same object to cache the values. Alternatively, a simpler way is to pass the fast flag as true: on the first iteration the values will be cached, and on subsequent iterations the cached values will be used.

    Returns boolean

    A boolean value.
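
    A sketch of testing many points against the same ellipse while reusing a shared inverts object (the exact fields of the ellipse object, a center plus per-axis radii, are an assumption not spelled out on this page):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      // Assumed ellipse shape: a center and per-axis radii in world (LPS) space.
      const ellipse = {
        center: [0, 0, 0] as Types.Point3,
        xRadius: 20,
        yRadius: 10,
        zRadius: 1,
      };

      const candidatePoints: Types.Point3[] = [
        [5, 2, 0],
        [25, 0, 0],
      ];

      // Share one inverts object so the inverted radius-squared values are
      // computed only once for the whole batch of points.
      const inverts = {};
      const inside = candidatePoints.filter((pointLPS) =>
        utilities.math.ellipse.pointInEllipse(ellipse, pointLPS, inverts)
      );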

precalculatePointInEllipse

  • precalculatePointInEllipse(ellipse: any, inverts?: Inverts): Inverts
  • This will perform some precalculations to make things faster. Ideally, use the ‘precalculated’ function inside inverts to call the test function. This minimizes re-reading of variables and only needs the LPS passed each time. That is:

    const inverts = precalculatePointInEllipse(ellipse);
    if (inverts.precalculated(pointLPS)) ...

    Parameters

    • ellipse: any
    • inverts: Inverts = {}

    Returns Inverts

lineSegment

lineSegment:

distanceToPoint

  • distanceToPoint(lineStart: Point2, lineEnd: Point2, point: Point2): number
  • Calculates the distance of a point to a line segment


    Parameters

    • lineStart: Point2

      x,y coordinates of the start of the line

    • lineEnd: Point2

      x,y coordinates of the end of the line

    • point: Point2

      x,y of the point

    Returns number

    distance

distanceToPointSquared

  • distanceToPointSquared(lineStart: Point2, lineEnd: Point2, point: Point2): number
  • Calculates the distance-squared of a point to a line segment


    Parameters

    • lineStart: Point2

      x,y coordinates of the start of the line

    • lineEnd: Point2

      x,y coordinates of the end of the line

    • point: Point2

      x,y of the point

    Returns number

    distance-squared
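
    A small sketch of both variants; the squared form avoids the square root and is preferable inside tight loops (the utilities.math.lineSegment path is assumed from this page):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      const lineStart: Types.Point2 = [0, 0];
      const lineEnd: Types.Point2 = [10, 0];
      const point: Types.Point2 = [5, 3];

      // Perpendicular distance from the point to the segment: 3.
      const d = utilities.math.lineSegment.distanceToPoint(lineStart, lineEnd, point);

      // Squared distance: 9. Cheaper when only comparing distances.
      const dSq = utilities.math.lineSegment.distanceToPointSquared(
        lineStart,
        lineEnd,
        point
      );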

distanceToPointSquaredInfo

  • distanceToPointSquaredInfo(lineStart: Point2, lineEnd: Point2, point: Point2): { distanceSquared: number; point: Types.Point2 }
  • Calculate the closest point and the squared distance between a reference point and a line segment.

    It projects the reference point onto the line segment, but the projection is bounded by the start/end points since this is a line segment and not an infinite line.


    Parameters

    • lineStart: Point2

      Start point of the line segment

    • lineEnd: Point2

      End point of the line segment

    • point: Point2

      Reference point

    Returns { distanceSquared: number; point: Types.Point2 }

    Closest point and the squared distance between a point and a line segment defined by lineStart and lineEnd points

    • distanceSquared: number
    • point: Types.Point2

intersectLine

  • intersectLine(line1Start: Point2, line1End: Point2, line2Start: Point2, line2End: Point2): number[]
  • Calculates the intersection point between two lines in the 2D plane


    Parameters

    • line1Start: Point2

      x,y coordinates of the start of the first line

    • line1End: Point2

      x,y coordinates of the end of the first line

    • line2Start: Point2

      x,y coordinates of the start of the second line

    • line2End: Point2

      x,y coordinates of the end of the second line

    Returns number[]

    [x, y] coordinates of the intersection point

isPointOnLineSegment

  • isPointOnLineSegment(lineStart: Point2, lineEnd: Point2, point: Point2): boolean
  • Test if a point is on a line segment


    Parameters

    • lineStart: Point2

      Line segment start point

    • lineEnd: Point2

      Line segment end point

    • point: Point2

      Point to test

    Returns boolean

    True if the point lies on the line segment or false otherwise

point

point:

distanceToPoint

  • distanceToPoint(p1: Point, p2: Point): number
  • Calculates the distance of a point to another point


    Parameters

    • p1: Point

      x,y or x,y,z of the point

    • p2: Point

      x,y or x,y,z of the point

    Returns number

    distance

distanceToPointSquared

  • distanceToPointSquared(p1: Point, p2: Point): number
  • Calculates the distance squared of a point to another point


    Parameters

    • p1: Point

      x,y or x,y,z of the point

    • p2: Point

      x,y or x,y,z of the point

    Returns number

    distance

mirror

  • mirror(mirrorPoint: Point2, staticPoint: Point2): Types.Point2
  • Get a mirrored point along the line created by two points, where one of them is the static (“anchor”) point and the other one is the point to be mirrored.


    Parameters

    • mirrorPoint: Point2

      2D point to be mirrored

    • staticPoint: Point2

      Static 2D point

    Returns Types.Point2

    Mirrored 2D point
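
    A small usage sketch (the expected result assumes mirroring is a reflection of the mirror point through the anchor point, per the description above):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      const staticPoint: Types.Point2 = [0, 0]; // anchor
      const mirrorPoint: Types.Point2 = [2, 3]; // point to be mirrored

      // Expected to be the reflection of mirrorPoint through the anchor,
      // i.e. [-2, -3] for these inputs.
      const mirrored = utilities.math.point.mirror(mirrorPoint, staticPoint);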

polyline

polyline:

addCanvasPointsToArray

  • addCanvasPointsToArray(element: HTMLDivElement, canvasPoints: Point2[], newCanvasPoint: Point2, commonData: PlanarFreehandROICommonData): number
  • Adds one or more points to the array at a resolution defined by the underlying image.


    Parameters

    • element: HTMLDivElement
    • canvasPoints: Point2[]
    • newCanvasPoint: Point2
    • commonData: PlanarFreehandROICommonData

    Returns number

containsPoint

  • containsPoint(polyline: Point2[], point: Point2, options?: { closed?: boolean; holes?: Point2[][] }): boolean
  • Checks if a 2D point is inside the polyline.

    A point is inside a curve/polygon if the number of intersections between a horizontal ray emanating to the right from the given point and the polyline’s line segments is odd. https://www.eecs.umich.edu/courses/eecs380/HANDOUTS/PROJ2/InsidePoly.html

    Note that a point on the polyline is considered inside.


    Parameters

    • polyline: Point2[]

      Polyline points (2D)

    • point: Point2

      2D Point

    • options: { closed?: boolean; holes?: Point2[][] } = ...

    Returns boolean

    True if the point is inside the polyline or false otherwise
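
    A sketch of the point-in-polyline test, with and without a hole (coordinates are illustrative; a point falling inside a hole is expected to be reported as outside):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      // A closed square polyline.
      const square: Types.Point2[] = [
        [0, 0],
        [10, 0],
        [10, 10],
        [0, 10],
      ];

      // True: the point is inside the square (points on the polyline also count as inside).
      const inside = utilities.math.polyline.containsPoint(square, [5, 5]);

      // With a hole covering the center, the same point is expected to be excluded.
      const insideWithHole = utilities.math.polyline.containsPoint(square, [5, 5], {
        closed: true,
        holes: [
          [
            [4, 4],
            [6, 4],
            [6, 6],
            [4, 6],
          ],
        ],
      });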

containsPoints

  • containsPoints(polyline: Point2[], points: Point2[]): boolean
  • Checks if a polyline contains a set of points.


    Parameters

    • polyline: Point2[]

      Polyline points (2D)

    • points: Point2[]

      2D points to verify

    Returns boolean

    True if all points are inside the polyline or false otherwise

decimate

  • decimate(polyline: Point2[], epsilon?: number): Point2[]

getAABB

  • getAABB(polyline: number[] | Point2[] | Point3[], options?: { numDimensions: number }): Types.AABB2 | Types.AABB3
  • Calculates the axis-aligned bounding box (AABB) of a polyline.


    Parameters

    • polyline: number[] | Point2[] | Point3[]

      The polyline represented as an array of points.

    • optional options: { numDimensions: number }

      Additional options for calculating the AABB.

    Returns Types.AABB2 | Types.AABB3

    The AABB of the polyline. If the polyline represents points in 3D space, returns an AABB3 object with properties minX, maxX, minY, maxY, minZ, and maxZ. If the polyline represents points in 2D space, returns an AABB2 object with properties minX, maxX, minY, and maxY.

getArea

  • getArea(points: Point2[]): number
  • Calculates the area of an array of Point2 points using the shoelace algorithm.

    The units of the area are the same units the points are in. E.g., if the points are in canvas coordinates, then the result is in canvas pixels²; if they are in mm, then the result is in mm²; etc.


    Parameters

    • points: Point2[]

    Returns number
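
    For instance, a quick shoelace check (the area of a 4 × 3 rectangle is 12, in the squared units of the input points):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      const rectangle: Types.Point2[] = [
        [0, 0],
        [4, 0],
        [4, 3],
        [0, 3],
      ];

      const area = utilities.math.polyline.getArea(rectangle); // 12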

getClosestLineSegmentIntersection

  • getClosestLineSegmentIntersection(points: Point2[], p1: Point2, q1: Point2, closed?: boolean): { distance: number; segment: Types.Point2 } | undefined
  • Checks whether the line (p1,q1) intersects any of the other lines in the points, and returns the closest value.


    Parameters

    • points: Point2[]

      Polyline points

    • p1: Point2

      Start point of the line segment

    • q1: Point2

      End point of the line segment

    • closed: boolean = true

      Test the intersection against the line that connects the first to the last when closed

    Returns { distance: number; segment: Types.Point2 } | undefined

    The closest line segment from polyline that intersects the line segment [p1, q1]

getFirstLineSegmentIntersectionIndexes

  • getFirstLineSegmentIntersectionIndexes(points: Point2[], p1: Point2, q1: Point2, closed?: boolean): Types.Point2 | undefined
  • Checks whether the line (p1,q1) intersects any of the other lines in the points, and returns the first value.


    Parameters

    • points: Point2[]

      Polyline points

    • p1: Point2

      First point of the line segment that is being tested

    • q1: Point2

      Second point of the line segment that is being tested

    • closed: boolean = true

      Test the intersection with the line segment that connects the last and first points of the polyline

    Returns Types.Point2 | undefined

    Indexes of the line segment points from the polyline that intersects [p1, q1]

getLineSegmentIntersectionsCoordinates

  • getLineSegmentIntersectionsCoordinates(points: Point2[], p1: Point2, q1: Point2, closed?: boolean): Types.Point2[]
  • Returns all intersections points between a line segment and a polyline


    Parameters

    • points: Point2[]
    • p1: Point2
    • q1: Point2
    • closed: boolean = true

    Returns Types.Point2[]

getLineSegmentIntersectionsIndexes

  • getLineSegmentIntersectionsIndexes(polyline: Point2[], p1: Point2, q1: Point2, closed?: boolean): Types.Point2[]
  • Get all intersections between a polyline and a line segment.


    Parameters

    • polyline: Point2[]

      Polyline points

    • p1: Point2

      Start point of line segment

    • q1: Point2

      End point of line segment

    • closed: boolean = true

      Test the intersection against the line segment that connects the last to the first point when set to true

    Returns Types.Point2[]

    Start/end point indexes of all line segments that intersect (p1, q1)

getNormal2

  • getNormal2(polyline: Point2[]): Types.Point3

getNormal3

  • getNormal3(polyline: Point3[]): Types.Point3
  • Calculate the normal of a 3D planar polyline


    Parameters

    • polyline: Point3[]

      Planar polyline in 3D space

    Returns Types.Point3

    Normal of the 3D planar polyline

getSignedArea

  • getSignedArea(polyline: Point2[]): number
  • Returns the signed area of a 2D polyline https://www.youtube.com/watch?v=GpsKrAipXm8&t=1900s

    This function has a runtime very close to getArea and it is recommended to call it only if you need the sign of the area (e.g. to calculate the polygon normal). If you do not need the sign, you should always call getArea.


    Parameters

    • polyline: Point2[]

      Polyline points (2D)

    Returns number

    Signed area of the polyline

getSubPixelSpacingAndXYDirections

  • getSubPixelSpacingAndXYDirections(viewport: default | default, subPixelResolution: number): { spacing: Point2; xDir: Point3; yDir: Point3 }
  • Gets the desired spacing for points in the polyline for the PlanarFreehandROITool in the x and y canvas directions, as well as returning these canvas directions in world space.


    Parameters

    • viewport: default | default

      The Cornerstone3D StackViewport or VolumeViewport.

    • subPixelResolution: number

      The number to divide the image pixel spacing by to get the sub pixel spacing. E.g. 10 will return spacings 10x smaller than the native image spacing.

    Returns { spacing: Point2; xDir: Point3; yDir: Point3 }

    The spacing in the x and y directions, and the corresponding 3D world directions.

    • spacing: Point2
    • xDir: Point3
    • yDir: Point3

getWindingDirection

  • getWindingDirection(polyline: Point2[]): number
  • Calculate the winding direction (CW or CCW) of a polyline


    Parameters

    • polyline: Point2[]

      Polyline (2D)

    Returns number

    1 for CW or -1 for CCW polylines

intersectPolyline

  • intersectPolyline(sourcePolyline: Point2[], targetPolyline: Point2[]): boolean
  • Check if two polylines intersect comparing line segment by line segment.


    Parameters

    • sourcePolyline: Point2[]

      Source polyline

    • targetPolyline: Point2[]

      Target polyline

    Returns boolean

    True if the polylines intersect or false otherwise

isClosed

  • isClosed(polyline: Point2[]): boolean
  • A polyline is considered closed if the start and end points are at the same position


    Parameters

    • polyline: Point2[]

      Polyline points (2D)

    Returns boolean

    True if the polyline is already closed or false otherwise

isPointInsidePolyline3D

  • isPointInsidePolyline3D(point: Point3, polyline: Point3[], options?: { holes?: Point3[][] }): boolean
  • Determines whether a 3D point is inside a polyline in 3D space.

    The algorithm works by reducing the polyline and point to 2D space, and then using the 2D algorithm to determine whether the point is inside the polyline.

    @throws

    An error if a shared dimension index cannot be found for the polyline points.


    Parameters

    • point: Point3

      The 3D point to test.

    • polyline: Point3[]

      The polyline represented as an array of 3D points.

    • options: { holes?: Point3[][] } = {}

    Returns boolean

    A boolean indicating whether the point is inside the polyline.

mergePolylines

  • mergePolylines(targetPolyline: Point2[], sourcePolyline: Point2[]): Point2[]
  • Merge two planar polylines (2D)


    Parameters

    • targetPolyline: Point2[]
    • sourcePolyline: Point2[]

    Returns Point2[]

pointCanProjectOnLine

  • pointCanProjectOnLine(p: Point2, p1: Point2, p2: Point2, proximity: number): boolean
  • Returns true if the point p can project onto the line segment (p1, p2), and this projected point is less than proximity units away.


    Parameters

    • p: Point2
    • p1: Point2
    • p2: Point2
    • proximity: number

    Returns boolean

pointsAreWithinCloseContourProximity

  • pointsAreWithinCloseContourProximity(p1: Point2, p2: Point2, closeContourProximity: number): boolean
  • Returns true if points p1 and p2 are within closeContourProximity.


    Parameters

    • p1: Point2
    • p2: Point2
    • closeContourProximity: number

    Returns boolean

projectTo2D

  • projectTo2D(polyline: Point3[]): { projectedPolyline: Point2[]; sharedDimensionIndex: any }
  • Projects a polyline from 3D to 2D by reducing one dimension.

    @throws

    Error if a shared dimension index cannot be found for the polyline.


    Parameters

    • polyline: Point3[]

      The polyline to be projected.

    Returns { projectedPolyline: Point2[]; sharedDimensionIndex: any }

    An object containing the shared dimension index and the projected polyline in 2D.

    • projectedPolyline: Point2[]
    • sharedDimensionIndex: any

subtractPolylines

  • subtractPolylines(targetPolyline: Point2[], sourcePolyline: Point2[]): Types.Point2[][]
  • Subtract two planar polylines (2D)


    Parameters

    • targetPolyline: Point2[]
    • sourcePolyline: Point2[]

    Returns Types.Point2[][]

rectangle

rectangle:

distanceToPoint

  • distanceToPoint(rect: number[], point: Point2): number
  • Calculates the distance of the point to the rectangle. It calculates the minimum distance between the point and each line segment of the rectangle.


    Parameters

    • rect: number[]

      coordinates of the rectangle [left, top, width, height]

    • point: Point2

      [x,y] coordinates of a point

    Returns number

vec2

vec2:

findClosestPoint

  • findClosestPoint(sourcePoints: Point2[], targetPoint: Point2): Types.Point2
  • Find the closest point to the target point


    Parameters

    • sourcePoints: Point2[]

      The potential source points.

    • targetPoint: Point2

      The target point, used to find the closest source.

    Returns Types.Point2

    The closest point in the array of point sources

liangBarksyClip

  • liangBarksyClip(a: any, b: any, box: any, da?: any, db?: any): 1 | 0

  • Parameters

    • a: any
    • b: any
    • box: any

      [xmin, ymin, xmax, ymax]

    • optional da: any
    • optional db: any

    Returns 1 | 0

orientation

orientation:

public getOrientationStringLPS

  • getOrientationStringLPS(vector: Point3): string
  • Returns the orientation of the vector in the patient coordinate system.


    Parameters

    • vector: Point3

      Input array

    Returns string

    The orientation in the patient coordinate system.

public invertOrientationStringLPS

  • invertOrientationStringLPS(orientationString: string): string
  • Inverts an orientation string.


    Parameters

    • orientationString: string

      The orientation.

    Returns string

    The inverted orientationString.
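
    A short sketch (the exact orientation string returned, e.g. 'L' for a vector pointing toward the patient's left, is an expectation based on the LPS convention, not stated on this page):

      import { utilities } from '@cornerstonejs/tools';
      import type { Types } from '@cornerstonejs/core';

      // A vector pointing along +x in LPS (toward the patient's left).
      const vector: Types.Point3 = [1, 0, 0];

      const orientation = utilities.orientation.getOrientationStringLPS(vector); // e.g. 'L'

      // Inverting flips each letter to its opposite, e.g. 'L' -> 'R'.
      const opposite = utilities.orientation.invertOrientationStringLPS(orientation);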

planar

planar:

filterAnnotationsForDisplay

  • filterAnnotationsForDisplay(viewport: IViewport, annotations: Annotations, filterOptions?: ReferenceCompatibleOptions): Annotations
  • Given the viewport and the annotations, it filters the annotations array and only returns those annotations that should be displayed on the viewport


    Parameters

    • viewport: IViewport
    • annotations: Annotations

      Annotations

    • filterOptions: ReferenceCompatibleOptions = {}

    Returns Annotations

    A filtered version of the annotations.

filterAnnotationsWithinSlice

  • filterAnnotationsWithinSlice(annotations: Annotations, camera: ICamera, spacingInNormalDirection: number): Annotations
  • Given some Annotations, and the slice defined by the camera’s normal direction and the spacing in the normal direction, filters the Annotations that are within the slice.


    Parameters

    • annotations: Annotations

      Annotations

    • camera: ICamera

      The camera

    • spacingInNormalDirection: number

      The spacing in the normal direction

    Returns Annotations

    The filtered Annotations.

getPointInLineOfSightWithCriteria

  • getPointInLineOfSightWithCriteria(viewport: default, worldPos: Point3, targetVolumeId: string, criteriaFunction: (intensity: number, point: Point3) => Point3, stepSize?: number): Types.Point3
  • Returns a point based on some criteria (e.g., minimum or maximum intensity) in the line of sight (on the line between the passed worldPos and the camera position). It iterates over the points on the line with a given step size.


    Parameters

    • viewport: default

      Volume viewport

    • worldPos: Point3

      World coordinates of the clicked location

    • targetVolumeId: string

      target Volume ID in the viewport

    • criteriaFunction: (intensity: number, point: Point3) => Point3

      A function that returns the point if it passes certain logic; for instance, it can be a maxValue function that keeps a record of all intensity values and only returns the point if its intensity is greater than the maximum intensity of the points visited before.

    • stepSize: number = 0.25

    Returns Types.Point3

    the World pos of the point that passes the criteriaFunction

getWorldWidthAndHeightFromCorners

  • getWorldWidthAndHeightFromCorners(viewPlaneNormal: Point3, viewUp: Point3, topLeftWorld: Point3, bottomRightWorld: Point3): { worldHeight: number; worldWidth: number }
  • Given two world positions and an orthogonal view to an imageVolume defined by a viewPlaneNormal and a viewUp, get the width and height in world coordinates of the rectangle defined by the two points. The implementation works with both orthogonal and non-orthogonal rectangles.


    Parameters

    • viewPlaneNormal: Point3

      The normal of the view.

    • viewUp: Point3

      The up direction of the view.

    • topLeftWorld: Point3

      The first world position.

    • bottomRightWorld: Point3

      The second world position.

    Returns { worldHeight: number; worldWidth: number }

    The worldWidth and worldHeight.

    • worldHeight: number
    • worldWidth: number

isPlaneIntersectingAABB

  • isPlaneIntersectingAABB(origin: any, normal: any, minX: any, minY: any, minZ: any, maxX: any, maxY: any, maxZ: any): boolean
  • Checks if a plane intersects with an Axis-Aligned Bounding Box (AABB).


    Parameters

    • origin: any

      The origin point of the plane.

    • normal: any

      The normal vector of the plane.

    • minX: any

      The minimum x-coordinate of the AABB.

    • minY: any

      The minimum y-coordinate of the AABB.

    • minZ: any

      The minimum z-coordinate of the AABB.

    • maxX: any

      The maximum x-coordinate of the AABB.

    • maxY: any

      The maximum y-coordinate of the AABB.

    • maxZ: any

      The maximum z-coordinate of the AABB.

    Returns boolean

    A boolean indicating whether the plane intersects with the AABB.

planarFreehandROITool

planarFreehandROITool:

smoothAnnotation

  • smoothAnnotation(enabledElement: IEnabledElement, annotation: PlanarFreehandROIAnnotation, knotsRatioPercentage: number): boolean
  • Interpolates a given annotation from a given enabledElement. It mutates the annotation param. The knotsRatioPercentage param defines the percentage of points to be considered as knots in the interpolation process. Interpolation is skipped if the annotation is not present in the enabledElement (or there is no toolGroup associated with it), or if the related tool is being modified.


    Parameters

    • enabledElement: IEnabledElement
    • annotation: PlanarFreehandROIAnnotation
    • knotsRatioPercentage: number

    Returns boolean

polyDataUtils

polyDataUtils:

getPoint

  • getPoint(points: any, idx: any): Types.Point3
  • Gets a point from an array of numbers given its index


    Parameters

    • points: any

      array of numbers; each point is defined by three consecutive numbers

    • idx: any

      index of the point to retrieve

    Returns Types.Point3

getPolyDataPointIndexes

  • getPolyDataPointIndexes(polyData: vtkPolyData): any[]
  • Extract contour point sets from the outline of a poly data actor


    Parameters

    • polyData: vtkPolyData

      vtk polyData

    Returns any[]

getPolyDataPoints

  • getPolyDataPoints(polyData: vtkPolyData): any[]
  • Extract contour points from a poly data object


    Parameters

    • polyData: vtkPolyData

      vtk polyData

    Returns any[]

rectangleROITool

rectangleROITool:

getBoundsIJKFromRectangleAnnotations

  • getBoundsIJKFromRectangleAnnotations(annotations: any, referenceVolume: any, options?: Options): any
  • Parameters

    • annotations: any
    • referenceVolume: any
    • options: Options = ...

    Returns any

isAxisAlignedRectangle

  • isAxisAlignedRectangle(rectangleCornersIJK: any): boolean
  • Determines whether a given rectangle in a 3D space (defined by its corner points in IJK coordinates) is aligned with the IJK axes.


    Parameters

    • rectangleCornersIJK: any

      The corner points of the rectangle in IJK coordinates

    Returns boolean

    True if the rectangle is aligned with the IJK axes, false otherwise

segmentation

segmentation:

contourAndFindLargestBidirectional

  • contourAndFindLargestBidirectional(segmentation: any): any
  • Generates a contour object over the segment, and then uses the contouring to find the largest bidirectional object that can be applied within the acquisition plane that is within the segment index, or the contained segment indices.


    Parameters

    • segmentation: any

    Returns any

createBidirectionalToolData

  • Creates data suitable for the BidirectionalTool from the basic bidirectional data object.


    Parameters

    Returns Annotation

createImageIdReferenceMap

  • createImageIdReferenceMap(imageIdsArray: string[], segmentationImageIds: string[]): Map<string, string>
  • Creates a map that associates each imageId with a set of segmentation imageIds. Note that this function assumes that the imageIds and segmentationImageIds arrays are the same length and same order.


    Parameters

    • imageIdsArray: string[]

      An array of imageIds.

    • segmentationImageIds: string[]

      An array of segmentation imageIds.

    Returns Map<string, string>

    A map that maps each imageId to a set of segmentation imageIds.
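
    A minimal sketch (the imageIds are placeholders; both arrays must have the same length and ordering, as noted above):

      import { utilities } from '@cornerstonejs/tools';

      const imageIds = [
        'wadors:https://example.org/studies/1/frames/1',
        'wadors:https://example.org/studies/1/frames/2',
      ];
      const segmentationImageIds = ['derived:seg-frame-1', 'derived:seg-frame-2'];

      // Maps each stack imageId to the segmentation imageId at the same position.
      const referenceMap = utilities.segmentation.createImageIdReferenceMap(
        imageIds,
        segmentationImageIds
      );

      referenceMap.get(imageIds[0]); // 'derived:seg-frame-1'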

createLabelmapVolumeForViewport

  • createLabelmapVolumeForViewport(input: { options?: { dimensions: Point3; direction: Mat3; metadata: Metadata; origin: Point3; scalarData: Float32Array | Int16Array | Uint16Array | Uint8Array; spacing: Point3; targetBuffer: { type: Float32Array | Uint16Array | Uint8Array | Int8Array }; volumeId: string }; renderingEngineId: string; segmentationId?: string; viewportId: string }): Promise<string>
  • Create a new 3D segmentation volume from the default imageData presented in the first actor of the viewport. It looks at the metadata of the imageData to determine the volume dimensions and spacing if particular options are not provided.


    Parameters

    • input: { options?: { dimensions: Point3; direction: Mat3; metadata: Metadata; origin: Point3; scalarData: Float32Array | Int16Array | Uint16Array | Uint8Array; spacing: Point3; targetBuffer: { type: Float32Array | Uint16Array | Uint8Array | Int8Array }; volumeId: string }; renderingEngineId: string; segmentationId?: string; viewportId: string }

    Returns Promise<string>

    A promise that resolves to the Id of the new labelmap volume.

createMergedLabelmapForIndex

  • createMergedLabelmapForIndex(labelmaps: IImageVolume[], segmentIndex?: number, volumeId?: string): Types.IImageVolume
  • Given a list of labelmaps (with the possibility of overlapping regions) and a segmentIndex, it creates a new labelmap with the same dimensions as the input labelmaps, but merges them into a single labelmap for the segmentIndex. It wipes out all other segment indices. This is useful for calculating statistics regarding a specific segment when there are overlapping regions between labelmaps (e.g. TMTV)


    Parameters

    • labelmaps: IImageVolume[]

      Array of labelmaps

    • segmentIndex: number = 1

      The segment index to merge

    • volumeId: string = 'mergedLabelmap'

    Returns Types.IImageVolume

    Merged labelmap

floodFill

  • floodFill.js - taken from the MIT-licensed OSS lib https://github.com/tuzz/n-dimensional-flood-fill and refactored to ES6. The bounds/visits checks were fixed to use integer keys, restricting the total search space to +/- 32k in each dimension, but resulting in about a hundred-fold performance gain for larger regions, since JavaScript does not have a hash map that would allow the map to work on such keys.


    Parameters

    • getter: FloodFillGetter

      The getter to the elements of your data structure, e.g. getter(x, y) for a 2D interpretation of your structure.

    • seed: Point2 | Point3

      The seed for your fill. The dimensionality is inferred from the number of dimensions of the seed.

    • options: FloodFillOptions = {}

    Returns FloodFillResult

    Flood fill results

getBrushSizeForToolGroup

  • getBrushSizeForToolGroup(toolGroupId: string, toolName?: string): void
  • Gets the brush size for the first brush-based tool instance in a given tool group.


    Parameters

    • toolGroupId: string

      The ID of the tool group to get the brush size for.

    • optional toolName: string

      The name of the specific tool to get the brush size for (optional). If not provided, the first brush-based tool instance in the tool group will be used.

    Returns void

    The brush size of the selected tool instance, or undefined if no brush-based tool instance is found.

getBrushThresholdForToolGroup

  • getBrushThresholdForToolGroup(toolGroupId: string): any
  • Parameters

    • toolGroupId: string

    Returns any

getBrushToolInstances

  • getBrushToolInstances(toolGroupId: string, toolName?: string): any[]
  • Parameters

    • toolGroupId: string
    • optional toolName: string

    Returns any[]

getDefaultRepresentationConfig

  • getDefaultRepresentationConfig(segmentation: Segmentation): LabelmapConfig
  • It returns a configuration object for the given representation type.


    Parameters

    Returns LabelmapConfig

    A representation configuration object.

getHoveredContourSegmentationAnnotation

  • getHoveredContourSegmentationAnnotation(segmentationId: any): number
  • Retrieves the index of the hovered contour segmentation annotation for a given segmentation ID.


    Parameters

    • segmentationId: any

      The ID of the segmentation.

    Returns number

    The index of the hovered contour segmentation annotation, or undefined if none is found.

getSegmentAtLabelmapBorder

  • getSegmentAtLabelmapBorder(segmentationId: string, worldPoint: Point3, options: Options): number
  • Retrieves the segment index at the border of a labelmap in a segmentation.


    Parameters

    • segmentationId: string

      The ID of the segmentation.

    • worldPoint: Point3

      The world coordinates of the point.

    • options: Options

      Additional options.

    Returns number

    The segment index at the labelmap border, or undefined if not found.

getSegmentAtWorldPoint

  • getSegmentAtWorldPoint(segmentationId: string, worldPoint: Point3, options?: Options): number
  • Get the segment at the specified world point in the viewport.


    Parameters

    • segmentationId: string

      The ID of the segmentation to get the segment for.

    • worldPoint: Point3

      The world point to get the segment for.

    • options: Options = ...

    Returns number

    The index of the segment at the world point, or undefined if not found.

getUniqueSegmentIndices

  • getUniqueSegmentIndices(segmentationId: any): any
  • Retrieves the unique segment indices from a given segmentation.

    @throws

    If no geometryIds are found for the segmentationId.


    Parameters

    • segmentationId: any

      The ID of the segmentation.

    Returns any

    An array of unique segment indices.

invalidateBrushCursor

  • invalidateBrushCursor(toolGroupId: string): void
  • Invalidates the brush cursor for a specific tool group. This function triggers the update of the brush being rendered. It also triggers an annotation render for any viewports on the tool group.


    Parameters

    • toolGroupId: string

      The ID of the tool group.

    Returns void

isValidRepresentationConfig

  • Given a representation type and a configuration, return true if the configuration is valid for that representation type


    Parameters

    • representationType: string

      The type of segmentation representation

    • config: RepresentationConfig

      RepresentationConfig

    Returns boolean

    A boolean value.

rectangleROIThresholdVolumeByRange

  • rectangleROIThresholdVolumeByRange(annotationUIDs: string[], segmentationVolume: IImageVolume, thresholdVolumeInformation: ThresholdInformation[], options: ThresholdOptions): Types.IImageVolume
  • It uses the provided rectangleROI annotations (either RectangleROIThreshold, or RectangleROIStartEndThreshold) to compute an ROI that is the intersection of all the annotations. Then it uses the rectangleROIThreshold utility to threshold the volume.


    Parameters

    • annotationUIDs: string[]

      rectangleROI annotationsUIDs to use for ROI

    • segmentationVolume: IImageVolume

      the segmentation volume

    • thresholdVolumeInformation: ThresholdInformation[]

      object array containing the volume data and range threshold values

    • options: ThresholdOptions

      options for thresholding

    Returns Types.IImageVolume

segmentContourAction

  • segmentContourAction(element: HTMLDivElement, configuration: any): any
  • Parameters

    • element: HTMLDivElement
    • configuration: any

    Returns any

setBrushSizeForToolGroup

  • setBrushSizeForToolGroup(toolGroupId: string, brushSize: number, toolName?: string): void
  • Sets the brush size for all brush-based tools in a given tool group.


    Parameters

    • toolGroupId: string

      The ID of the tool group to set the brush size for.

    • brushSize: number

      The new brush size to set.

    • optional toolName: string

      The name of the specific tool to set the brush size for (optional). If not provided, all brush-based tools in the tool group will be affected.

    Returns void
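
    A hedged sketch of adjusting the brush size for a tool group (the tool group ID and tool name are placeholders, not values defined by the library):

      import { utilities } from '@cornerstonejs/tools';

      // Resize every brush-based tool in the group.
      utilities.segmentation.setBrushSizeForToolGroup('SEGMENTATION_TOOL_GROUP', 15);

      // Or target a single (hypothetical) brush tool instance by name.
      utilities.segmentation.setBrushSizeForToolGroup(
        'SEGMENTATION_TOOL_GROUP',
        15,
        'CircularBrush'
      );

      // Read the size back from the first brush-based tool in the group.
      const size = utilities.segmentation.getBrushSizeForToolGroup(
        'SEGMENTATION_TOOL_GROUP'
      );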

setBrushThresholdForToolGroup

  • setBrushThresholdForToolGroup(toolGroupId: string, threshold: Point2, otherArgs?: Record<string, unknown>): void
  • Parameters

    • toolGroupId: string
    • threshold: Point2
    • otherArgs: Record<string, unknown> = ...

    Returns void

thresholdSegmentationByRange

  • thresholdSegmentationByRange(segmentationVolume: IImageVolume, segmentationIndex: number, thresholdVolumeInformation: ThresholdInformation[], overlapType: number): Types.IImageVolume
  • It thresholds a segmentation volume based on a set of threshold values with respect to a list of volumes and respective threshold ranges.


    Parameters

    • segmentationVolume: IImageVolume

      the segmentation volume to be modified

    • segmentationIndex: number

      the index of the segmentation to modify

    • thresholdVolumeInformation: ThresholdInformation[]

      array of objects containing volume data and a range (lower and upper values) to threshold

    • overlapType: number

      indicates if the user requires all voxels pass (overlapType = 1) or any voxel pass (overlapType = 0)

    Returns Types.IImageVolume

thresholdVolumeByRange

  • thresholdVolumeByRange(segmentationVolume: IImageVolume, thresholdVolumeInformation: ThresholdInformation[], options: ThresholdRangeOptions): Types.IImageVolume
  • It thresholds a segmentation volume based on a set of threshold values with respect to a list of volumes and respective threshold ranges.


    Parameters

    • segmentationVolume: IImageVolume

      the segmentation volume to be modified

    • thresholdVolumeInformation: ThresholdInformation[]

      array of objects containing volume data and a range (lower and upper values) to threshold

    • options: ThresholdRangeOptions

      the options for thresholding. Since the volumes might have different dimensions and spacing, there may be no 1-to-1 voxel mapping, so we work with voxel overlaps (1-to-many mappings). All intersections are considered valid, to avoid the complexity of calculating a minimum voxel intersection percentage. Given a voxel center and spacing, this function calculates the overlap of the voxel with another volume and range-checks the voxels in the overlap. Three situations can occur: all voxels pass the range check, some voxels pass, or no voxels pass. The overlapType parameter indicates whether the user requires all voxels to pass (overlapType = 1) or any voxel to pass (overlapType = 0)

    Returns Types.IImageVolume

    segmented volume

triggerSegmentationRender

  • triggerSegmentationRender(toolGroupId: string): void
  • It triggers a render for all the segmentations of the tool group with the given Id.


    Parameters

    • toolGroupId: string

      The Id of the tool group to render.

    Returns void

touch

touch:

copyPoints

copyPointsList

  • Copies a set of points.


    Parameters

    Returns ITouchPoints[]

    A copy of the points.

getDeltaDistance

  • Returns the distance between multiple IPoints instances.


    Parameters

    • currentPoints: IPoints[]

      The current points.

    • lastPoints: IPoints[]

      The last points, to be subtracted from the currentPoints.

    Returns IDistance

    The distance difference in IDistance format

getDeltaDistanceBetweenIPoints

  • Returns the distance difference between multiple IPoints instances.


    Parameters

    • currentPoints: IPoints[]

      The current points.

    • lastPoints: IPoints[]

      The last points.

    Returns IDistance

    The difference in IPoints format

getDeltaPoints

  • Returns the difference between multiple IPoints instances.


    Parameters

    • currentPoints: IPoints[]

      The current points.

    • lastPoints: IPoints[]

      The last points, to be subtracted from the currentPoints.

    Returns IPoints

    The difference in IPoints format

getDeltaRotation

getMeanPoints

getMeanTouchPoints

viewport

viewport:

jumpToSlice

Re-exports jumpToSlice

isViewportPreScaled

  • isViewportPreScaled(viewport: default | default, targetId: string): boolean
  • Parameters

    • viewport: default | default
    • targetId: string

    Returns boolean

jumpToWorld

  • jumpToWorld(viewport: default, jumpWorld: Point3): true | undefined
  • Uses the viewport’s current camera to jump to a specific world coordinate


    Parameters

    • viewport: default
    • jumpWorld: Point3

      location in the world to jump to

    Returns true | undefined

    True if successful
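
    A sketch of jumping a volume viewport to a world coordinate (the rendering engine and viewport IDs are placeholders; getRenderingEngine and Types come from @cornerstonejs/core):

      import { getRenderingEngine } from '@cornerstonejs/core';
      import type { Types } from '@cornerstonejs/core';
      import { utilities } from '@cornerstonejs/tools';

      const renderingEngine = getRenderingEngine('myRenderingEngine');
      // jumpToWorld expects a volume viewport.
      const viewport = renderingEngine.getViewport(
        'CT_AXIAL'
      ) as Types.IVolumeViewport;

      // Moves the viewport's camera so this world coordinate comes into view.
      utilities.viewport.jumpToWorld(viewport, [0, 0, -125.5]);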

viewportFilters

viewportFilters:

filterViewportsWithFrameOfReferenceUID

  • filterViewportsWithFrameOfReferenceUID(viewports: IViewport[], FrameOfReferenceUID: string): (Types.IStackViewport | Types.IVolumeViewport)[]
  • Given an array of viewports, returns a list of viewports that are viewing a world space with the given FrameOfReferenceUID.


    Parameters

    • viewports: IViewport[]

      An array of viewports.

    • FrameOfReferenceUID: string

      The UID defining a particular world space/Frame Of Reference.

    Returns (Types.IStackViewport | Types.IVolumeViewport)[]

    A filtered array of viewports.

filterViewportsWithParallelNormals

  • filterViewportsWithParallelNormals(viewports: any, camera: any, EPS?: number): any
  • It filters the viewports that are looking at the same view as the camera. It basically checks if the viewport’s viewPlaneNormal is parallel to the camera’s viewPlaneNormal.


    Parameters

    • viewports: any

      Array of viewports to filter

    • camera: any

      Camera to compare against

    • EPS: number = 0.999

    Returns any

    • Array of viewports with the same view

filterViewportsWithToolEnabled

  • filterViewportsWithToolEnabled(viewports: IViewport[], toolName: string): (Types.IStackViewport | Types.IVolumeViewport)[]
  • Given an array of viewports, returns a list of viewports that have the specified tool enabled.


    Parameters

    • viewports: IViewport[]

      An array of viewports.

    • toolName: string

      The name of the tool to filter on.

    Returns (Types.IStackViewport | Types.IVolumeViewport)[]

    A filtered array of viewports.

getViewportIdsWithToolToRender

  • getViewportIdsWithToolToRender(element: HTMLDivElement, toolName: string, requireParallelNormals?: boolean): string[]
  • Given a cornerstone3D enabled element, and a toolName, find all viewportIds looking at the same Frame Of Reference that have the tool with the given toolName active, passive or enabled.


    Parameters

    • element: HTMLDivElement

      The target cornerstone3D enabled element.

    • toolName: string

      The string toolName.

    • requireParallelNormals: boolean = true

      If true, only return viewports that have parallel normals.

    Returns string[]

    An array of viewportIds.
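
A hedged sketch of a common pattern: find the viewports that should re-render a tool's annotations and trigger the render. The tool name 'Length' and the enabled-element setup are assumptions for illustration.

    import { getEnabledElement } from '@cornerstonejs/core';
    import { utilities as cstUtils } from '@cornerstonejs/tools';

    function rerenderToolAnnotations(element: HTMLDivElement): void {
      // Assumes the element has already been enabled on a rendering engine
      const { renderingEngine } = getEnabledElement(element);

      // Viewports sharing the Frame of Reference with this element that have the tool
      const viewportIds = cstUtils.viewportFilters.getViewportIdsWithToolToRender(
        element,
        'Length'
      );

      cstUtils.triggerAnnotationRenderForViewportIds(renderingEngine, viewportIds);
    }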

voi

voi:

colorbar

colorbar:

Enums

Enums:

ColorbarRangeTextPosition

ColorbarRangeTextPosition:

Specifies the position of the text/ticks. Left/Right are the valid options for vertical colorbars and Top/Bottom for horizontal ones.

Bottom

Bottom: bottom

Left

Left: left

Right

Right: right

Top

Top: top

Types

Types:

ColorbarCommonProps

ColorbarCommonProps: { imageRange?: ColorbarImageRange; showFullPixelValueRange?: boolean; ticks?: { position?: ColorbarRangeTextPosition; style?: ColorbarTicksStyle }; voiRange?: ColorbarVOIRange }

Type declaration

  • optionalimageRange?: ColorbarImageRange
  • optionalshowFullPixelValueRange?: boolean
  • optionalticks?: { position?: ColorbarRangeTextPosition; style?: ColorbarTicksStyle }
    • optionalposition?: ColorbarRangeTextPosition
    • optionalstyle?: ColorbarTicksStyle
  • optionalvoiRange?: ColorbarVOIRange

ColorbarImageRange

ColorbarImageRange: { lower: number; upper: number }

Type declaration

  • lower: number
  • upper: number

ColorbarProps

ColorbarProps: (WidgetProps & ColorbarCommonProps) & { activeColormapName?: string; colormaps: IColorMapPreset[] }

ColorbarSize

ColorbarSize: { height: number; width: number }

Type declaration

  • height: number
  • width: number

ColorbarTicksProps

ColorbarTicksProps: ColorbarCommonProps & { container?: HTMLElement; left?: number; size?: ColorbarSize; top?: number }

ColorbarTicksStyle

ColorbarTicksStyle: { color?: string; font?: string; labelMargin?: number; maxNumTicks?: number; tickSize?: number; tickWidth?: number }

Type declaration

  • optionalcolor?: string
  • optionalfont?: string
  • optionallabelMargin?: number
  • optionalmaxNumTicks?: number
  • optionaltickSize?: number
  • optionaltickWidth?: number

ColorbarVOIRange

ColorbarVOIRange: ColorbarImageRange

ViewportColorbarProps

ViewportColorbarProps: ColorbarProps & { element: HTMLDivElement; volumeId?: string }

Colorbar

Colorbar:

A base colorbar class that is not associated with any viewport. Users can click and drag to change the VOI range; ticks are shown during interaction, and the bar can display either the full image range or the VOI range.

constructor

  • new Colorbar(props: ColorbarProps): Colorbar
  • Parameters

    • props: ColorbarProps

    Returns Colorbar

publicactiveColormapName

  • get activeColormapName(): string
  • set activeColormapName(colormapName: string): void
  • Returns the active LUT name


    Returns string

  • Set the current active LUT name and re-renders the color bar


    Parameters

    • colormapName: string

    Returns void

publicid

  • get id(): string
  • Widget id


    Returns string

publicimageRange

  • get imageRange(): ColorbarImageRange
  • set imageRange(imageRange: ColorbarImageRange): void
  • Returns ColorbarImageRange

  • Parameters

    • imageRange: ColorbarImageRange

    Returns void

publicrootElement

  • get rootElement(): HTMLElement
  • Widget’s root element


    Returns HTMLElement

publicshowFullImageRange

  • get showFullImageRange(): boolean
  • set showFullImageRange(value: boolean): void
  • Returns boolean

  • Parameters

    • value: boolean

    Returns void

publicvoiRange

  • get voiRange(): ColorbarImageRange
  • set voiRange(voiRange: ColorbarImageRange): void
  • Returns ColorbarImageRange

  • Parameters

    • voiRange: ColorbarImageRange

    Returns void

public_createTicksBar

  • _createTicksBar(props: ColorbarProps): ColorbarTicks
  • Parameters

    • props: ColorbarProps

    Returns ColorbarTicks

publicappendTo

  • appendTo(container: HTMLElement): void
  • Append the widget to a parent element


    Parameters

    • container: HTMLElement

      HTML element where the widget should be added to

    Returns void

publicdestroy

  • destroy(): void
  • Returns void

ViewportColorbar

ViewportColorbar:

A colorbar associated with a viewport that updates automatically when the viewport's VOI changes or when the stack/volume is updated. (A setup sketch follows this class's members.)

constructor

  • new ViewportColorbar(props: ViewportColorbarProps): ViewportColorbar
  • Parameters

    • props: ViewportColorbarProps

    Returns ViewportColorbar

publicactiveColormapName

  • get activeColormapName(): string
  • set activeColormapName(colormapName: string): void
  • Returns the active LUT name


    Returns string

  • Set the current active LUT name and re-renders the color bar


    Parameters

    • colormapName: string

    Returns void

publicelement

  • get element(): HTMLDivElement
  • Returns HTMLDivElement

publicenabledElement

  • get enabledElement(): IEnabledElement
  • Returns IEnabledElement

publicid

  • get id(): string
  • Widget id


    Returns string

publicimageRange

  • get imageRange(): ColorbarImageRange
  • set imageRange(imageRange: ColorbarImageRange): void
  • Returns ColorbarImageRange

  • Parameters

    • imageRange: ColorbarImageRange

    Returns void

publicrootElement

  • get rootElement(): HTMLElement
  • Widget’s root element


    Returns HTMLElement

publicshowFullImageRange

  • get showFullImageRange(): boolean
  • set showFullImageRange(value: boolean): void
  • Returns boolean

  • Parameters

    • value: boolean

    Returns void

publicvoiRange

  • get voiRange(): ColorbarImageRange
  • set voiRange(voiRange: ColorbarImageRange): void
  • Returns ColorbarImageRange

  • Parameters

    • voiRange: ColorbarImageRange

    Returns void

public_createTicksBar

  • _createTicksBar(props: ColorbarProps): ColorbarTicks
  • Parameters

    • props: ColorbarProps

    Returns ColorbarTicks

publicappendTo

  • appendTo(container: HTMLElement): void
  • Append the widget to a parent element


    Parameters

    • container: HTMLElement

      HTML element where the widget should be added to

    Returns void

publicdestroy

  • destroy(): void
  • Returns void
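
The setup sketch referenced above. It assumes the colorbar classes are exposed under utilities.voi.colorbar and that vtk.js colormap presets are used; the element ids, widget id, and volumeId are placeholders.

    import vtkColorMaps from '@kitware/vtk.js/Rendering/Core/ColorTransferFunction/ColorMaps';
    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const { ViewportColorbar } = cstUtils.voi.colorbar;

    // Build the IColorMapPreset[] list from the vtk.js presets (one possible source)
    const colormaps = vtkColorMaps.rgbPresetNames.map((name) =>
      vtkColorMaps.getPresetByName(name)
    );

    const colorbar = new ViewportColorbar({
      id: 'ctColorbar', // widget id (assumed WidgetProps field)
      element: document.getElementById('ctAxialViewport') as HTMLDivElement,
      volumeId: 'myVolumeId', // placeholder volume id
      colormaps,
      activeColormapName: 'Grayscale',
    });

    // Render the colorbar inside a container placed next to the viewport
    colorbar.appendTo(document.getElementById('ctColorbarContainer'));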

windowLevel

windowLevel:

calculateMinMaxMean

  • calculateMinMaxMean(pixelLuminance: any, globalMin: any, globalMax: any): { max: any; mean: number; min: any }
  • Parameters

    • pixelLuminance: any
    • globalMin: any
    • globalMax: any

    Returns { max: any; mean: number; min: any }

    • max: any
    • mean: number
    • min: any

extractWindowLevelRegionToolData

  • extractWindowLevelRegionToolData(viewport: any): { color: any; columns: any; height: any; maxPixelValue: number; minPixelValue: number; rows: any; scalarData: any; width: any }
  • Parameters

    • viewport: any

    Returns { color: any; columns: any; height: any; maxPixelValue: number; minPixelValue: number; rows: any; scalarData: any; width: any }

    • color: any
    • columns: any
    • height: any
    • maxPixelValue: number
    • minPixelValue: number
    • rows: any
    • scalarData: any
    • width: any

getLuminanceFromRegion

  • getLuminanceFromRegion(imageData: any, x: any, y: any, width: any, height: any): any[]
  • Extracts the luminance values from a specified region of an image.


    Parameters

    • imageData: any

      The image data object containing pixel information.

    • x: any

      The x-coordinate of the top-left corner of the region.

    • y: any

      The y-coordinate of the top-left corner of the region.

    • width: any

      The width of the region.

    • height: any

      The height of the region.

    Returns any[]

    An array containing the luminance values of the specified region.
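
A hedged sketch tying the windowLevel helpers above together: extract the tool data for a viewport, read the luminance of a small region, and summarize it. The region coordinates are placeholders, and the utilities.windowLevel path is assumed from this page's structure.

    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const {
      extractWindowLevelRegionToolData,
      getLuminanceFromRegion,
      calculateMinMaxMean,
    } = cstUtils.windowLevel;

    function summarizeRegion(viewport: unknown) {
      const imageData = extractWindowLevelRegionToolData(viewport);

      // Luminance of a 64x64 region whose top-left corner is at (10, 20)
      const luminance = getLuminanceFromRegion(imageData, 10, 20, 64, 64);

      return calculateMinMaxMean(
        luminance,
        imageData.minPixelValue,
        imageData.maxPixelValue
      );
    }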

Classes

annotationFrameRange

annotationFrameRange:

This class handles the annotation frame range values for multiframes. Mostly used for the Video viewport, it allows references to a range of frame values.

constructor

  • new annotationFrameRange(): default

publicstaticframesToString

  • framesToString(range: any): string
  • Parameters

    • range: any

    Returns string

publicstaticgetFrameRange

  • getFrameRange(annotation: Annotation): number | [number, number]
  • Parameters

    Returns number | [number, number]

publicstaticsetFrameRange

  • setFrameRange(annotation: Annotation, range: string | FramesRange, eventBase?: { renderingEngineId: any; viewportId: any }): void
  • Sets the range of frames to associate with the given annotation. The range can be a single frame number (1-based, per DICOM) or a range in the format min-max, where min and max are inclusive. Modifies the referencedImageID to specify the updated URL. (A usage sketch follows this entry.)


    Parameters

    • annotation: Annotation
    • range: string | FramesRange
    • optionaleventBase: { renderingEngineId: any; viewportId: any }

    Returns void
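
The usage sketch referenced above. It assumes the class is exposed as utilities.annotationFrameRange; the annotation, ids, and range are placeholders.

    import { utilities as cstUtils } from '@cornerstonejs/tools';
    import type { Types as cstTypes } from '@cornerstonejs/tools';

    function tagVideoAnnotation(annotation: cstTypes.Annotation): void {
      // Associate frames 10 through 25 (1-based, inclusive) with the annotation
      cstUtils.annotationFrameRange.setFrameRange(annotation, '10-25', {
        renderingEngineId: 'myEngine',
        viewportId: 'VIDEO_VIEWPORT',
      });

      // Reading it back returns a single number or an inclusive [min, max] tuple
      const range = cstUtils.annotationFrameRange.getFrameRange(annotation);
      console.log('Annotation frame range:', range);
    }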

Variables

conststackContextPrefetch

stackContextPrefetch: { disable: (element: any) => void; enable: (element: any) => void; getConfiguration: () => { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }; setConfiguration: (config: any) => void } = ...

Type declaration

  • disable: (element: any) => void
      • (element: any): void
      • Parameters

        • element: any

        Returns void

  • enable: (element: any) => void
      • (element: any): void
      • Call this to enable stack context-sensitive prefetch. It should be called before the stack data is set so that prefetching starts after the first image loads. It adds a STACK_NEW_IMAGE listener to detect when a new image is displayed, and then updates the prefetch stack. The context-sensitive prefetch reacts to the initial display or significant moves, the already loaded images, the cache size, and the direction of navigation. The behaviour is (a usage sketch follows this type declaration):

        1. On navigating to a new image initially, or to one at a different position:
        • Fetch the next/previous 2 images
        2. If the user is navigating forward/backward by fewer than 5 images:
        • Prefetch additional images in the direction of navigation, up to 100
        3. If all the images in a given prefetch have completed:
        • Use the last prefetched image size as the image size for the stack
        • Fetch up to 1/4 of the cache size in images near the current image

        This is designed to:

        • Get nearby images immediately so that they are available for navigation
          • Under the assumption that users might click and view an image, then navigate to the next/previous image to see the exact image they want
        • Not interfere with loading other viewports if they are still loading
          • Load priority is prefetch, and minimal images are requested initially
        • Load an entire series if it will fit in memory
          • Allowing navigation to other parts of the series with immediate display
        • Have images available for CINE/navigation in one direction even when there is more image data than will fit in memory
          • Up to 100 images in the direction of travel will be prefetched

        Parameters

        • element: any

          to prefetch on

        Returns void

  • getConfiguration: () => { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }
      • (): { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }
      • Returns { directionExtraImages: number; maxAfter: number; maxImagesToPrefetch: number; minBefore: number; preserveExistingPool: boolean }

        • directionExtraImages: number
        • maxAfter: number
        • maxImagesToPrefetch: number
        • minBefore: number
        • preserveExistingPool: boolean
  • setConfiguration: (config: any) => void
      • (config: any): void
      • Parameters

        • config: any

        Returns void
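
The usage sketch referenced above. The element id and configuration tweak are placeholders; it assumes enable is called before the stack is set on the viewport.

    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const element = document.getElementById('ctStackViewport') as HTMLDivElement;

    // Enable before viewport.setStack(...) so prefetching starts once the first image loads
    cstUtils.stackContextPrefetch.enable(element);

    // Optionally cap how many images may be prefetched, keeping the other defaults
    cstUtils.stackContextPrefetch.setConfiguration({
      ...cstUtils.stackContextPrefetch.getConfiguration(),
      maxImagesToPrefetch: 50,
    });

    // When tearing the viewport down
    cstUtils.stackContextPrefetch.disable(element);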

conststackPrefetch

stackPrefetch: { disable: (element: any) => void; enable: (element: any) => void; getConfiguration: () => { maxImagesToPrefetch: number; preserveExistingPool: boolean }; setConfiguration: (config: any) => void } = ...

Type declaration

  • disable: (element: any) => void
      • (element: any): void
      • Parameters

        • element: any

        Returns void

  • enable: (element: any) => void
      • (element: any): void
      • Parameters

        • element: any

        Returns void

  • getConfiguration: () => { maxImagesToPrefetch: number; preserveExistingPool: boolean }
      • (): { maxImagesToPrefetch: number; preserveExistingPool: boolean }
      • Returns { maxImagesToPrefetch: number; preserveExistingPool: boolean }

        • maxImagesToPrefetch: number
        • preserveExistingPool: boolean
  • setConfiguration: (config: any) => void
      • (config: any): void
      • Parameters

        • config: any

        Returns void

Functions

calibrateImageSpacing

  • calibrateImageSpacing(imageId: string, renderingEngine: default, calibrationOrScale: number | IImageCalibration): void
  • It adds the provided spacing to the Cornerstone internal calibratedPixelSpacing metadata provider, then invalidates all the tools whose annotations reference the imageId. Finally, it triggers a re-render of the invalidated annotations. (A usage sketch follows this entry.)


    Parameters

    • imageId: string

      ImageId for the calibrated image

    • renderingEngine: default
    • calibrationOrScale: number | IImageCalibration

      either the calibration object or a scale value

    Returns void
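
The usage sketch referenced above; the imageId, rendering engine id, and the 0.2 scale value are placeholders.

    import { getRenderingEngine } from '@cornerstonejs/core';
    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const renderingEngine = getRenderingEngine('myEngine');

    // A plain number is interpreted as a scale value; an IImageCalibration object
    // can be passed instead for richer calibration metadata.
    cstUtils.calibrateImageSpacing(
      'wadouri:https://example.com/dicom/image-1.dcm', // placeholder imageId
      renderingEngine,
      0.2
    );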

publicclip

  • clip(val: number, low: number, high: number): number
  • Clips a value to an upper and lower bound.

    @export
    @method
    @name clip


    Parameters

    • val: number

      The value to clip.

    • low: number

      The lower bound.

    • high: number

      The upper bound.

    Returns number

    The clipped value.

debounce

  • debounce(func: Function, wait?: number, options?: { leading: boolean; maxWait: number; trailing: boolean }): Function
  • Creates a debounced function that delays invoking func until after wait milliseconds have elapsed since the last time the debounced function was invoked, or until the next browser frame is drawn. The debounced function comes with a cancel method to cancel delayed func invocations and a flush method to immediately invoke them. Provide options to indicate whether func should be invoked on the leading and/or trailing edge of the wait timeout. The func is invoked with the last arguments provided to the debounced function. Subsequent calls to the debounced function return the result of the last func invocation.

    Note: If leading and trailing options are true, func is invoked on the trailing edge of the timeout only if the debounced function is invoked more than once during the wait timeout.

    If wait is 0 and leading is false, func invocation is deferred until the next tick, similar to setTimeout with a timeout of 0.

    If wait is omitted in an environment with requestAnimationFrame, func invocation will be deferred until the next frame is drawn (typically about 16ms).

    See David Corbacho’s article for details over the differences between debounce and throttle.

    @example

    // Avoid costly calculations while the window size is in flux.
    jQuery(window).on('resize', debounce(calculateLayout, 150))

    // Invoke sendMail when clicked, debouncing subsequent calls.
    jQuery(element).on('click', debounce(sendMail, 300, { 'leading': true, 'trailing': false }))

    // Ensure batchLog is invoked once after 1 second of debounced calls.
    const debounced = debounce(batchLog, 250, { 'maxWait': 1000 })
    const source = new EventSource('/stream')
    jQuery(source).on('message', debounced)

    // Cancel the trailing debounced invocation.
    jQuery(window).on('popstate', debounced.cancel)

    // Check for pending invocations.
    const status = debounced.pending() ? 'Pending...' : 'Ready'


    Parameters

    • func: Function

      The function to debounce.

    • optionalwait: number

      The number of milliseconds to delay; if omitted, requestAnimationFrame is used (if available).

    • optionaloptions: { leading: boolean; maxWait: number; trailing: boolean }

      The options object.

    Returns Function

    Returns the new debounced function.

getAnnotationNearPoint

  • getAnnotationNearPoint(element: HTMLDivElement, canvasPoint: Point2, proximity?: number): Annotation | null
  • Gets the annotation that is close to the provided canvas point; it returns the first annotation found. (A hit-test sketch follows this entry.)


    Parameters

    • element: HTMLDivElement

      The element to search for an annotation on.

    • canvasPoint: Point2

      The canvasPoint on the page where the user clicked.

    • proximity: number = 5

      The distance from the canvasPoint to the annotation.

    Returns Annotation | null

    The annotation for the element
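
The hit-test sketch referenced above. Converting the mouse event to a canvas point via the element's bounding rect is an approximation, and the element id and 6-pixel proximity are placeholders.

    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const element = document.getElementById('ctAxialViewport') as HTMLDivElement;

    element.addEventListener('click', (evt: MouseEvent) => {
      const rect = element.getBoundingClientRect();
      const canvasPoint: [number, number] = [
        evt.clientX - rect.left,
        evt.clientY - rect.top,
      ];

      const annotation = cstUtils.getAnnotationNearPoint(element, canvasPoint, 6);
      if (annotation) {
        console.log('Clicked near annotation', annotation.annotationUID);
      }
    });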

getAnnotationNearPointOnEnabledElement

  • getAnnotationNearPointOnEnabledElement(enabledElement: IEnabledElement, point: Point2, proximity: number): Annotation | null
  • Finds the annotation near the given point on the enabled element; it returns the first annotation found.


    Parameters

    • enabledElement: IEnabledElement

      The element that is currently active.

    • point: Point2

      The point to search near.

    • proximity: number

      The distance from the point that the annotation must be within.

    Returns Annotation | null

    An Annotation object.

getCalibratedAspect

  • getCalibratedAspect(image: any): any
  • Gets the aspect ratio of the screen display relative to the image display in order to square up measurement values. For example, if the image spacing is 1, 0.5 (x, y spacing) but it is displayed at 1, 1 spacing on screen, the aspect value will be 1 / 0.5 = 2.


    Parameters

    • image: any

    Returns any

getCalibratedLengthUnitsAndScale

  • getCalibratedLengthUnitsAndScale(image: any, handles: any): { areaUnits: string; scale: number; units: string }
  • Extracts the calibrated length units, area units, and the scale for converting from internal spacing to image spacing.


    Parameters

    • image: any

      to extract the calibration from

    • handles: any

      to detect if spacing information is different between points

    Returns { areaUnits: string; scale: number; units: string }

    Object containing the units, area units, and scale

    • areaUnits: string
    • scale: number
    • units: string

getCalibratedProbeUnitsAndValue

  • getCalibratedProbeUnitsAndValue(image: any, handles: any): { calibrationType: undefined; units: string[]; values: any[] } | { calibrationType: string; units: string[]; values: any[] }
  • Parameters

    • image: any
    • handles: any

    Returns { calibrationType: undefined; units: string[]; values: any[] } | { calibrationType: string; units: string[]; values: any[] }

getSphereBoundsInfo

  • getSphereBoundsInfo(circlePoints: [Point3, Point3], imageData: vtkImageData, viewport: any): { bottomRightWorld: Types.Point3; boundsIJK: BoundsIJK; centerWorld: Types.Point3; radiusWorld: number; topLeftWorld: Types.Point3 }
  • Given an imageData and the great circle top and bottom points of a sphere, this function computes the sphere's bounding information: its center and radius in world coordinates, the top-left and bottom-right world corners, and the bounds in IJK. If the viewport is provided, the region of interest will be a more accurate approximation of the sphere (using the viewport camera), and the resulting performance will be better.


    Parameters

    • circlePoints: [Point3, Point3]

      bottom and top points of the great circle in world coordinates

    • imageData: vtkImageData

      The volume imageData

    • viewport: any

    Returns { bottomRightWorld: Types.Point3; boundsIJK: BoundsIJK; centerWorld: Types.Point3; radiusWorld: number; topLeftWorld: Types.Point3 }

    • bottomRightWorld: Types.Point3
    • boundsIJK: BoundsIJK
    • centerWorld: Types.Point3
    • radiusWorld: number
    • topLeftWorld: Types.Point3

getViewportForAnnotation

  • getViewportForAnnotation(annotation: Annotation): default | default
  • Finds a matching viewport in terms of the orientation of the annotation data and the frame of reference. This doesn’t mean the annotation IS being displayed in the viewport, just that it could be by navigating the slice, and/or pan/zoom, without changing the orientation.


    Parameters

    • annotation: Annotation

      to find a viewport that it could display in

    Returns default | default

    The viewport to display in

isObject

  • isObject(value: any): boolean
  • Checks if value is the language type of Object. (e.g. arrays, functions, objects, regexes, new Number(0), and new String(''))

    @since

    0.1.0

    @example
    isObject({})
    // => true

    isObject([1, 2, 3])
    // => true

    isObject(Function)
    // => true

    isObject(null)
    // => false

    Parameters

    • value: any

      The value to check.

    Returns boolean

    Returns true if value is an object, else false.

jumpToSlice

  • It uses the imageIndex in the options to jump to the intended slice. It works for both Stack and Volume viewports. In Volume viewports, the imageIndex should be given with respect to the slice index in the 3D image along the view direction (i.e. the slice index in Axial, Sagittal, Coronal, or Oblique). (A usage sketch follows this entry.)


    Parameters

    • element: HTMLDivElement

      the HTML Div element scrolling inside

    • options: JumpToSliceOptions = ...

      the options used for jumping to a slice

    Returns Promise<void>

    Promise that resolves once the viewport has jumped to the slice
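
The usage sketch referenced above; the element and slice index are placeholders, and the option name imageIndex follows the description above.

    import { utilities as cstUtils } from '@cornerstonejs/tools';

    async function goToSlice(element: HTMLDivElement, imageIndex: number): Promise<void> {
      // For volume viewports the index is along the current view direction
      await cstUtils.jumpToSlice(element, { imageIndex });
    }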

pointInShapeCallback

  • pointInShapeCallback(imageData: vtkImageData | CPUImageData, pointInShapeFn: ShapeFnCriteria, callback?: PointInShapeCallback, boundsIJK?: BoundsIJK): PointInShape[]
  • For each point in the image (or, if boundsIJK is provided, for each point in that bounding box), it runs the provided callback if the point passes the criteria for being inside the shape, as defined by the provided pointInShapeFn. (A usage sketch follows this entry.)


    Parameters

    • imageData: vtkImageData | CPUImageData

      The image data object.

    • pointInShapeFn: ShapeFnCriteria

      A function that takes a point in LPS space and returns true if the point is in the shape and false if it is not.

    • optionalcallback: PointInShapeCallback

      A function that will be called for every point in the shape.

    • optionalboundsIJK: BoundsIJK

      The bounds of the volume in IJK coordinates.

    Returns PointInShape[]
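
The usage sketch referenced above: counting voxels inside an axis-aligned ellipsoid. It assumes the shape predicate receives the candidate point in world (LPS) coordinates as its first argument; the center and radii are placeholders.

    import { utilities as cstUtils } from '@cornerstonejs/tools';

    type Point3 = [number, number, number];

    function countPointsInEllipsoid(
      imageData: any, // vtkImageData (or CPU image data) for the volume
      center: Point3,
      radii: Point3
    ): number {
      let count = 0;

      cstUtils.pointInShapeCallback(
        imageData,
        // Inside the ellipsoid when the normalized squared distance is <= 1
        (pointLPS: Point3) =>
          (pointLPS[0] - center[0]) ** 2 / radii[0] ** 2 +
            (pointLPS[1] - center[1]) ** 2 / radii[1] ** 2 +
            (pointLPS[2] - center[2]) ** 2 / radii[2] ** 2 <=
          1,
        () => {
          count += 1;
        }
      );

      return count;
    }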

pointInSurroundingSphereCallback

  • pointInSurroundingSphereCallback(imageData: vtkImageData, circlePoints: [Point3, Point3], callback: PointInShapeCallback, viewport?: default): void
  • Given an imageData, and the great circle top and bottom points of a sphere, this function will run the callback for each point of the imageData that is within the sphere defined by the great circle points. If the viewport is provided, region of interest will be an accurate approximation of the sphere (using viewport camera), and the resulting performance will be better.


    Parameters

    • imageData: vtkImageData

      The volume imageData

    • circlePoints: [Point3, Point3]

      bottom and top points of the great circle in world coordinates

    • callback: PointInShapeCallback

      A callback function that will be called for each point in the shape.

    • optionalviewport: default

    Returns void

pointToString

  • pointToString(point: any, decimals?: number): string
  • Parameters

    • point: any
    • decimals: number = 5

    Returns string

roundNumber

  • roundNumber(value: string | number | (string | number)[], precision?: number): string
  • Parameters

    • value: string | number | (string | number)[]
    • optionalprecision: number

    Returns string

scroll

  • Scrolls one slice in a Stack or Volume viewport, using the provided options to determine the slice to scroll to. For a Stack viewport it scrolls in the 1 or -1 direction; for a Volume viewport it uses the camera and focal point, together with the slice spacing, to determine the target slice. (A usage sketch follows this entry.)


    Parameters

    • viewport: IViewport

      The viewport in which to scroll

    • options: ScrollOptions

      Options to use for scrolling, including direction, invert, and volumeId

    Returns void
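
The usage sketch referenced above; the ids are placeholders, and the option name delta (1 or -1 for direction) is an assumption not confirmed by this page.

    import { getRenderingEngine } from '@cornerstonejs/core';
    import { utilities as cstUtils } from '@cornerstonejs/tools';

    const renderingEngine = getRenderingEngine('myEngine');
    const viewport = renderingEngine.getViewport('CT_AXIAL');

    // Scroll one slice forward; -1 would scroll the other way (option name assumed)
    cstUtils.scroll(viewport, { delta: 1 } as any);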

throttle

  • throttle(func: Function, wait?: number, options?: { leading: boolean; trailing: boolean }): Function
  • Creates a throttled function that only invokes func at most once per every wait milliseconds (or once per browser frame). The throttled function comes with a cancel method to cancel delayed func invocations and a flush method to immediately invoke them. Provide options to indicate whether func should be invoked on the leading and/or trailing edge of the wait timeout. The func is invoked with the last arguments provided to the throttled function. Subsequent calls to the throttled function return the result of the last func invocation.

    Note: If leading and trailing options are true, func is invoked on the trailing edge of the timeout only if the throttled function is invoked more than once during the wait timeout.

    If wait is 0 and leading is false, func invocation is deferred until the next tick, similar to setTimeout with a timeout of 0.

    If wait is omitted in an environment with requestAnimationFrame, func invocation will be deferred until the next frame is drawn (typically about 16ms).

    See David Corbacho’s article for details over the differences between throttle and debounce.

    @example

    // Avoid excessively updating the position while scrolling.
    jQuery(window).on('scroll', throttle(updatePosition, 100))

    // Invoke renewToken when the click event is fired, but not more than once every 5 minutes.
    const throttled = throttle(renewToken, 300000, { 'trailing': false })
    jQuery(element).on('click', throttled)

    // Cancel the trailing throttled invocation.
    jQuery(window).on('popstate', throttled.cancel)


    Parameters

    • func: Function

      The function to throttle.

    • optionalwait: number

      The number of milliseconds to throttle invocations to; if omitted, requestAnimationFrame is used (if available).

    • optionaloptions: { leading: boolean; trailing: boolean }

      The options object.

    Returns Function

    Returns the new throttled function.

triggerAnnotationRender

  • triggerAnnotationRender(element: HTMLDivElement): void
  • It triggers the rendering of the annotations for the given HTML element using the AnnotationRenderingEngine


    Parameters

    • element: HTMLDivElement

      The element to render the annotation on.

    Returns void

triggerAnnotationRenderForToolGroupIds

  • triggerAnnotationRenderForToolGroupIds(toolGroupIds: string[]): void
  • Triggers annotation rendering for the specified tool group IDs.


    Parameters

    • toolGroupIds: string[]

      An array of tool group IDs.

    Returns void

triggerAnnotationRenderForViewportIds

  • triggerAnnotationRenderForViewportIds(renderingEngine: default, viewportIdsToRender: string[]): void
  • Parameters

    • renderingEngine: default
    • viewportIdsToRender: string[]

    Returns void

triggerEvent

  • triggerEvent(el: EventTarget, type: string, detail?: unknown): boolean
  • Parameters

    • el: EventTarget
    • type: string
    • optionaldetail: unknown

    Returns boolean