
Friday, November 29, 2019

RT Image Review Worklist Triage

In a previous post on better image review workflows, I noted that David Clunie's zombie apocalypse seemed imminent as we waited for better predictive models of image review behavior.  Well, the zombie apocalypse never happened.  But I have made some progress in sketching a basic model to analyze usage patterns.

The inputs are:
  • Time of day of physician login / workstation location
  • Time of day of image acquisition
  • Time offset of image approval
  • Image feature vector, in some latent space of patient geometry
These inputs are then regressed to predict the likelihood of an image being opened on a given workstation at a given time.
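A minimal sketch of such a model, using plain gradient-descent logistic regression over toy feature rows (the feature layout and labels here are invented for illustration; a real fit would use logged workstation events):

```python
import numpy as np

# Hypothetical feature rows: [login_hour, acquisition_hour, approval_offset,
# *latent_geometry], with label 1 if the image was opened on that workstation.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(float)  # toy ground truth

def fit_logistic(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def open_likelihood(x, w, b):
    """Predicted probability that the image will be opened."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

w, b = fit_logistic(X, y)
```

The predicted probability can then drive caching decisions downstream.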

What can be done with this prediction?  Caching on the local drive could be accomplished using:
  • Windows Offline Files
  • Dokan library, or similar
  • BDS Storage for local drive
Additionally, memory caching could be used:
  • For WCF-based SOA architectures, memory caching can be implemented either in the client (using a custom binding stack) or in the service as a custom IOperationInvoker.
  • MSQ 3D IRW already implements memory caching for AVS-based cone beam CTs.
So with some implementation for the actual caching, we can fit the model and produce predictions.  If only we had some test data...
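To make the idea concrete, here is a toy prediction-driven memory cache (the class and its `load` callback are hypothetical, not any of the products listed above): it keeps the N items with the highest predicted open-likelihood and falls back to on-demand fetching on a miss.

```python
class PredictiveCache:
    """Toy memory cache: keep the N images with the highest predicted
    open-likelihood. `load` is any callable fetching pixel data by UID."""
    def __init__(self, load, capacity=2):
        self.load = load
        self.capacity = capacity
        self.entries = {}  # uid -> (score, data)

    def prefetch(self, uid, score):
        if uid in self.entries:
            self.entries[uid] = (score, self.entries[uid][1])
            return
        if len(self.entries) >= self.capacity:
            victim = min(self.entries, key=lambda u: self.entries[u][0])
            if self.entries[victim][0] >= score:
                return  # existing entries are more likely to be opened
            del self.entries[victim]
        self.entries[uid] = (score, self.load(uid))

    def get(self, uid):
        if uid in self.entries:
            return self.entries[uid][1]
        return self.load(uid)  # cache miss: fetch on demand
```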

Stay tuned for a demo of the model, possibly with only fake input data.

Monday, August 14, 2017

Better Image Review Workflows Part II: DICOM Spatial Registrations

I remember meeting with our colleagues at Varian's Imaging Lab a few years back, to discuss (among other things) better ways of using DICOM Spatial Registration Objects (SROs) to encode radiotherapy setup corrections.  They had been considering support for some advanced features, like 4D CBCT and dynamic DRR generation.  

It was clear that having a reliable DICOM RT Plan Instance UID and referenced Patient Setup module would be immensely useful in interpreting the setup corrections.  But where would this information be stored?  As private attributes in the CBCT slices?  Or in the SRO itself?  The treatment record contains this relationship for each treatment field, but there is no way of encoding a reference to a standalone setup image within the treatment record (or so I've been told).


Siemens has, for years now, used a DICOM Structured Report to encode offset values, in lieu of an SRO, at least for portal images.  I think skipping the SRO entirely was a mistake, but when I look at the state of RT SROs today, it seems in retrospect that a semantically meaningful Structured Report pulling together the other objects (SRO, treatment record, CBCT slices) would be quite an advance over current practice.  Looking at some of the defined IODs for structured reports:
  • Mammography CAD Structured Report
  • Chest CAD Structured Report
  • Acquisition Context Structured Report
it does not seem too far-fetched to consider an SR encoding the relationships between a CBCT or portal image and its context objects.
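As a schematic only (the concept names and UIDs below are placeholders, and a real implementation would encode this with a DICOM toolkit as an SR content tree of COMPOSITE/IMAGE reference items), such an SR might relate the objects like this:

```python
# Schematic content tree for a hypothetical "RT setup correction" SR.
setup_correction_sr = {
    "value_type": "CONTAINER",
    "concept": "RT Setup Correction Context",
    "content": [
        {"value_type": "COMPOSITE", "concept": "Spatial Registration",
         "referenced_sop_instance_uid": "1.2.3.4.100"},  # the SRO
        {"value_type": "IMAGE", "concept": "Verification Image",
         "referenced_sop_instance_uid": "1.2.3.4.200"},  # CBCT or portal image
        {"value_type": "COMPOSITE", "concept": "RT Plan",
         "referenced_sop_instance_uid": "1.2.3.4.300"},  # holds Patient Setup module
        {"value_type": "COMPOSITE", "concept": "RT Treatment Record",
         "referenced_sop_instance_uid": "1.2.3.4.400"},
    ],
}

def referenced_uids(node):
    """Walk the content tree, collecting every referenced SOP Instance UID."""
    uids = []
    if "referenced_sop_instance_uid" in node:
        uids.append(node["referenced_sop_instance_uid"])
    for child in node.get("content", []):
        uids.extend(referenced_uids(child))
    return uids
```

A consumer could then resolve all of an image's context objects from a single SR instance.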

I've started (again) reading David Clunie's Structured Report book, in hopes of gaining more insight into what can (or should not) be encoded using SRs.  It seems like quite a powerful capability--I'm wondering why the RT domain has not made more use of it...

Better Image Review Workflows Part I: Zombie Apocalypse

I was reading a bit about the RayCare OIS under development, and wondered why an optimization algorithm company would have an interest in developing an enterprise oncology informatics system.

It doesn't take much to realize that oncology informatics is rife with opportunities for applying optimization expertise.  Analysis of the flood of data is only the most obvious benefit from expertise in machine learning and statistical inference.  But even the bread and butter of oncology informatics--managing daily workflows and data--could benefit from this expertise.

For the data generated by daily workflows to be meaningfully used, the workflows (and supporting infrastructure) must be efficient.  Such an infrastructure needs to support workflows involving various actors (both people and systems), as well as a consistent data model shared by the actors to represent the entities participating in the workflows.

The DICOM Information Model for radiotherapy provides a relatively consistent, if still somewhat incomplete, model of how data is produced and consumed during RT treatment.  Such a data model can help streamline workflows, as a semantically consistent data stream can be used as the input to the fabled prefetcher (see Clunie's zombie apocalypse scenario), ensuring the right data is available at the right time.  And if we know enough about the workflows (i.e. what processing needs to be done and what areas of interest are to be reviewed), then we can even prepare this beforehand, so when the reviewer picks the next item from the worklist, it is all ready to go.
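A trivial sketch of that prefetch ordering (the worklist shape is invented for illustration): fetch the data referenced by the soonest-due review items first, skipping anything already queued.

```python
import heapq

def plan_prefetch(worklist, now):
    """Order pending review items so the prefetcher fetches the soonest-needed
    data first.  Each item is (expected_review_time, sop_instance_uids)."""
    queue = [(t - now, uids) for t, uids in worklist if t >= now]
    heapq.heapify(queue)
    order = []
    while queue:
        _, uids = heapq.heappop(queue)
        order.extend(u for u in uids if u not in order)
    return order
```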

So if RayCare's OIS can make use of some optimization expertise to build a better autonomous prefetch algorithm (and the associated pre-processing and softcopy presentation state determination algorithms), this would probably be a pretty nifty trick.

But getting all the sources of data to fit into the DICOM information model, or (even better) fixing the DICOM information model to make it more suited for purpose, seems to be the big hiccup here.  Maybe with RayCare also working on the problem, things will begin to advance.  That's what free markets do, and we all want free markets, right?

Friday, March 17, 2017

Better Image Review User Experience

In my previous post, I mentioned three aspects of image review that can be improved.  The first of these is improvements in the user experience during image review.

The increased use of image-guidance during the course of treatment represents a significant source of knowledge about the delivery, but also an additional burden on daily routines.  The subjective tedium of working through a list of images can be lightened by enhancing aspects of the review user experience.

Usability concerns that can be addressed include:
  • Color schemes that are better adjusted for visual analysis, for instance with darker colors
  • Animated transitions to facilitate visual parsing of state changes, such as pop-up toolbars and changes in image selection
  • Use of spatial layout to convey semantic relationships, as in thumbnail and carousel presentations of images in temporal succession
  • Progressive rendering and dynamic resizing of image elements, to mask variations in the availability of network resources
Combining these techniques in a single workspace can produce a compelling user experience for image review.



A number of architectural patterns are used to support these techniques.  Examples of these patterns are contained in the PheonixRt.Mvvm prototype.

Sunday, April 24, 2016

How to make image review better?

Image review is critical to modern IGRT, and will be equally important for adaptive RT.  Yet today most users complain that it is a painful chore.  So how do we make image review better?

I think three areas are critical:

  1. A better image review user experience
  2. Better workflows to support image review
  3. Advanced analysis tools to assist in the review process
I'll post my thoughts on areas of improvement for each of these.


Sunday, August 30, 2015

MVVM and SOA for scientific visualization

I've been working on an architecture that combines the simplicity of MVVM for UI interactions with the scalability, pluggability, and distributed nature of SOA for processing data for visualization.  The first prototype I've implemented is the PheonixRt.Mvvm application (on GitHub at https://github.com/dg1an3/PheonixRt.Mvvm).

PheonixRt.Mvvm is split into two executables:

  • Front-end
    • UI containing the MVVM
    • Interaction with back-end services is via service helpers (using standard .NET events)
  • Back-end
    • hosts the services responsible for pre-processing data
    • hosts services that visualize the data, such as calculating an MPR, a mesh intersection, or an isosurface
The important boundary is between what processing should be done by services versus what processing can be done by the View (which is WPF in this case).  The line I've drawn is that anything that has been reduced to a:

  • Bitmap (including alpha values)
  • 2D vector geometry, such as a line, polyline, or polygon
  • 2D transformation

can be exposed as bindable properties on the ViewModel, and then any additional rendering can be done by the View (for instance, to add adornments or other rendering styles).  So the services are necessary to turn the data into these kinds of primitives, and WPF will take it from there.
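The primitives at that boundary can be sketched as plain data types (here in Python standing in for the C#/WPF ViewModel properties; the type names are mine, not the prototype's):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Bitmap:
    """Rendered raster output from a service, including alpha values."""
    width: int
    height: int
    rgba: bytes

@dataclass
class Polyline2D:
    """2D vector geometry: a line, polyline, or polygon as point pairs."""
    points: List[Tuple[float, float]]

@dataclass
class Transform2D:
    """2D transformation as a row-major 2x3 affine matrix [[a,b,tx],[c,d,ty]]."""
    matrix: Tuple[Tuple[float, float, float], Tuple[float, float, float]]

    def apply(self, p):
        (a, b, tx), (c, d, ty) = self.matrix
        x, y = p
        return (a * x + b * y + tx, c * x + d * y + ty)
```

Anything richer than these three stays on the service side; anything expressible in them binds directly to the View.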

This prototype also looks at the use of the standby pool as a means of caching large amounts of data to be ready for loading.  This is similar to what the Windows SuperFetch feature does for DLLs, but in this case it is large volumetric data being pre-cached.

Friday, March 27, 2015

word2vec

I've been able to analyze some notes using word2vec, and the extracted "meaning" has some interesting properties.  Maybe next I'll run some archived comments from the Mosaiq Users listserv and see how often users talk about:
  • slow loading of images
  • import problems due to inconsistent SRO semantics
  • issues with imported structure sets
  • re-registering the ImageReview3DForm COM control
[which all point to a need for a better image review capability for Mosaiq users.]
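The "meaning" properties come down to vector geometry: words used in similar contexts get nearby embeddings. A toy illustration with invented 3-d vectors (real embeddings would come from training word2vec on the listserv archive, e.g. with gensim):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 3-d "embeddings", invented for illustration only.
vecs = {
    "slow":    np.array([0.9, 0.1, 0.0]),
    "loading": np.array([0.8, 0.2, 0.1]),
    "import":  np.array([0.1, 0.9, 0.2]),
    "sro":     np.array([0.0, 0.8, 0.3]),
}

# Words from the same complaint cluster should score higher than
# words from different clusters.
```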

WarpTPS

One of the problems to be addressed for adaptive treatment paradigms is the need to visualize and interact with deformable vector fields (DVFs).  While a number of techniques exist for visualizing vector fields, such as heat maps and hedgehog plots, a simple technique is to allow interactive morphing to examine how the vector field is altering the target image to match the source.

WarpTPS Prototype

This is a very old MFC program that allows loading two images, and then provides a slider to morph back and forth between them.

Note that currently the two images loaded through File > Open Images... must have the same width/height, and both must be in .BMP format.
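The slider mechanics can be sketched as follows (a simplified stand-in for the MFC code, not the actual WarpTPS implementation: it applies a given dense DVF with nearest-neighbor sampling and cross-dissolves, where the real program interpolates displacements from thin-plate spline landmarks):

```python
import numpy as np

def morph(source, target, dvf, t):
    """Morph `source` toward `target` for slider position t in [0, 1].
    `dvf[y, x]` is the (dy, dx) displacement mapping each target pixel
    back to its matching source pixel."""
    h, w = source.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Apply a fraction t of the displacement, nearest-neighbor sampled.
    sy = np.clip(np.rint(ys + t * dvf[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + t * dvf[..., 1]).astype(int), 0, w - 1)
    warped = source[sy, sx]
    # Cross-dissolve the warped source into the target.
    return (1 - t) * warped + t * target
```

At t=0 the result is the untouched source; at t=1 it is the target; in between, the geometry and intensities blend.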

First is a video of Grumpy to Hedgy:


And this slightly more clinically relevant example shows one MR slice being morphed onto another:



Thursday, March 26, 2015

pheonixrt has a new home!

I just recently completed the GitHub migration, due to Google Code shutting down.

https://github.com/dg1an3/pheonixrt

Three projects are represented:
  • pheonixrt is the original inverse planning algorithm based on the convolutional input layer
  • WarpTPS is the interactive morphing using TPSs
  • ALGT is the predicate verification tools
Also, see the references at the end of http://en.wikipedia.org/wiki/Thin_plate_spline for some videos showing WarpTPS.

Sunday, August 25, 2013

How to Fold a JuliaSet

I wrote a Julia set animation way back when that would trace a Lissajous curve through the parameter space defining the sets and then draw / erase the resulting fractals.  With color effects, it was quite fun to watch even on a 4.77 MHz PC.

So I was reminded of it when I saw this very nice WebGL-based animation describing how Julia sets are generated.  [Note that you really need to view the site in Chrome or Firefox to get the full effect, as it requires WebGL.]  If you've only vaguely understood how the Mandelbrot set is produced, or the relationship between Mandelbrot and Julia sets, then it is worth stepping through the visuals for a very nice description of the complex math involved and how the iterations actually produce the fractals.
Plus, the animated plot points look a lot like my old JuliaB code, but it's too bad you can't interactively change the plot points.
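The underlying iteration is compact enough to sketch here (a generic escape-time routine, not the original JuliaB code):

```python
import numpy as np

def julia_escape_counts(c, size=65, extent=1.5, max_iter=50):
    """Escape-time iteration z <- z^2 + c over a grid of starting points.
    Points still bounded (|z| <= 2) after max_iter are treated as inside
    the Julia set; escaped points record the iteration they escaped on."""
    ax = np.linspace(-extent, extent, size)
    z = ax[None, :] + 1j * ax[:, None]
    counts = np.full(z.shape, max_iter)
    alive = np.ones(z.shape, dtype=bool)
    for i in range(max_iter):
        z[alive] = z[alive] ** 2 + c
        escaped = alive & (np.abs(z) > 2.0)
        counts[escaped] = i
        alive &= ~escaped
    return counts
```

The escape counts are what get mapped to colors; varying c along a curve through parameter space is what animated the original program.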

Friday, October 28, 2011

ClearCanvas and RT Objects

I'm looking at the source code for ClearCanvas and wondering aloud how hard it would be to modify to store RT objects.  For instance:
  • Current generation DICOM RT SOPs
  • Supplement 74 objects
  • Patient positioning objects
  • Next generation DICOM RT SOPs
Of course, rendering the objects in the Volume.MPR viewer would be especially cool, but one step at a time...maybe registering a replacement for ImageReview3DForm would be a logical next step.

Wednesday, March 16, 2011

My Early Mandelbrot Plots (part II)

And here is another plot, made with a non-speckle noise pattern on a color inkjet printer, circa 1991.


My Early Mandelbrot Plots (part I)

This was a plot made with a speckle noise pattern.  I think the colors were assigned using RGB sinusoidal waves (i.e. the frequencies of the R, G, and B waves were varied independently).  Then the actual color was assigned based on a random value.

This was done circa 1991.  I remember there was another plot I had made with purplish colors, that I really liked, but I gave it to someone...
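That palette scheme can be reconstructed roughly like this (the frequencies are guesses; I no longer have the original code):

```python
import numpy as np

def sinusoidal_palette(n, freq=(0.10, 0.13, 0.17)):
    """Assign RGB colors from three sine waves with independently
    varied frequencies, in the spirit of the 1991 plots."""
    i = np.arange(n)
    rgb = np.stack([0.5 + 0.5 * np.sin(f * i) for f in freq], axis=1)
    return (255 * rgb).astype(np.uint8)
```

Indexing this palette by a random value per pixel gives the speckle effect.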



Friday, March 04, 2011

My Work TimeGlider

I just figured out how to embed it.  Now if I could only figure out how not to show the annoying introduction box...






Thursday, November 04, 2010

Code Hygiene and Medical Devices

I've been thinking about code hygiene of late.  I recall Ekkehard (at the time the VP of engineering for Siemens OCS) giving a talk about code hygiene, arguing that diligence about design and coding was an essential part of device development. This was based on his experience involving a few instances where sloppy coding resulted in malfunctions in the field.

Having experienced similar field issues due to non-hygienic design practices, I do feel quite strongly that meticulous coding and design are an important part of medical device development. I guess you can't always know that a 'sloppy' design will invariably increase the likelihood of a malfunction, but it is a feeling probably somewhat akin to the OCD-like behavior that surgeons often exhibit outside the operating room (i.e. once you've developed a sense of hygienic vs. non-hygienic, it's difficult to turn off that sense even in situations where there is a low likelihood of something bad happening).

Saturday, November 14, 2009

LINQ to Fields

I've been looking into the lambda expression capability in LINQ / C# 3.0, and it occurred to me that it provides the perfect mechanism for implementing a field abstraction I have been thinking about for a while.  The idea is that an image / volume / region / transform is represented at its most basic level as a lambda that takes a vector as input and produces a scalar, vector, or bool as its output.

The most interesting part is the use of Expression<> to create expression trees.  This allows lambdas to be combined into algebraic expressions to form new lambdas, with the result being an expression tree that can be examined via reflection.  So one could assemble the lambda that represents some image processing operation, and then infer from it how to construct a pipeline (for instance, with ITK) that implements the expression.

Then LINQ could be used to construct these expressions, given a set of data objects and possible transformations that could be applied.  Each field entity would have the necessary attributes to describe how it could be combined with other entities, and then the LINQ query would do the combining.
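A rough analogue of the idea (in Python rather than C#, since Expression<> is .NET-specific; the class and node names are mine): each field is a point-to-value lambda, and operator overloads build both the composed lambda and an inspectable expression tree.

```python
class Field:
    """A field is a lambda from a point to a value, paired with an
    inspectable expression tree describing how it was composed."""
    def __init__(self, fn, expr):
        self.fn = fn      # point -> scalar
        self.expr = expr  # tree of tuples, e.g. ("add", left, right)

    def __call__(self, p):
        return self.fn(p)

    def __add__(self, other):
        return Field(lambda p: self.fn(p) + other.fn(p),
                     ("add", self.expr, other.expr))

    def __mul__(self, other):
        return Field(lambda p: self.fn(p) * other.fn(p),
                     ("mul", self.expr, other.expr))

# Two primitive fields over 2-d points.
gauss = Field(lambda p: 2.718 ** (-(p[0] ** 2 + p[1] ** 2)), ("gauss",))
plane = Field(lambda p: p[0], ("plane_x",))

combined = gauss + plane
```

The `expr` tree plays the role of the reflected Expression<> body: a pipeline builder could walk it and emit, say, ITK filter stages instead of evaluating the lambda directly.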

Wednesday, May 16, 2007

Innovation and Risk

Having recently completed the first phase of a new inverse planning algorithm for radiation treatment planning, I have been thinking a lot about the relationship between innovation and risk. Of course, everyone knows that innovation is intrinsically risky, but the question is: what are all of the components of this risk, and can all be mitigated as efficiently as possible?

The commitment of time and resources to the initial development of a new idea is one of the first sources of risk in innovation. This risk can best be managed by following a path that develops the innovation to a suitable state for evaluation, while expending a minimal amount of resources.

But once this point has been reached, there is still further risk that persists, due to the need to couple further development to pragmatic concerns of how the innovation is to be used (productization). While this will consume even more resources, it seems that the current market(s) for innovation could benefit from significant improvements in efficiency of how this risk is mitigated.

For any new innovation, there will always be some early adopters who would be willing to expend some of their own resources toward the productization of a promising new technology. The problem is that this early adopter's risk is not efficiently mitigated, because the adopters themselves seem to only get intangible benefits from this risk. For instance, a high-profile clinic with many researchers will undertake new technologies because it allows their researchers to maintain their status as cutting-edge innovators. This means that, when deciding which innovations to adopt, they will mostly evaluate the likely "halo effect" of being associated with a ground-breaking new technology, which is an intangible benefit that eludes quantitative evaluation. Thus they will tend to be looking for "blockbuster" technology, much like Hollywood makes money mostly on a few blockbuster movies. Smaller independent films need financiers who are more willing to undertake smaller risks for smaller possible benefits, in return for equity interest.

But why can't early adopters in technology also partake in "equity interest" of some sort for their risk? This might encourage more commitment of resources during the productization phase of innovation, which would then make the initial development phase correspondingly less risky as well.
