Monday, February 25, 2019

A picture of IEC Table rotations

I have always struggled with a mental picture to help understand how rotation directions work in IEC table coordinates.  As we were looking into support for decubitus positioning, it occurred to me that rotation directions could be understood in terms of the relationships between orientations separated by 90 degrees.

For instance, HFS > HFDL > HFP > HFDR are all related by a 90 degree table roll / transverse rotation, and the direction corresponds to a positive rotation (clockwise when viewed from the foot of the table).  Likewise, HFS > FFS and HFP > FFP can be viewed as resulting from two 90 degree rotations, with an intermediate stop at a rotation that isn't typically used for treatment (but does generally correspond to stereotactic treatment mode on advanced linacs).

So I assembled this into a single chart, which I have been checking over to see whether it still makes sense:
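The roll relationship in the chart can also be written down as code. A minimal sketch (the names are my own, not from any standard or library), modeling the head-first orientations as a cycle under a positive 90-degree table roll:

```python
# Head-first patient orientations related by a positive 90-degree table roll
# (clockwise when viewed from the foot of the table).
HEAD_FIRST_ROLL_CYCLE = ["HFS", "HFDL", "HFP", "HFDR"]

def roll_90(orientation, steps=1):
    """Return the orientation reached after `steps` positive 90-degree rolls."""
    cycle = HEAD_FIRST_ROLL_CYCLE
    i = cycle.index(orientation)
    return cycle[(i + steps) % len(cycle)]
```

Two steps of `roll_90` take supine to prone (HFS to HFP), matching the chart's two-rotation paths.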

Sunday, March 25, 2018

CQRS Commands, stateless services, and thin clients

Looking at the CQRS architectural pattern, there are interactions that maybe I didn't fully understand.  If the targets of queries and commands are the Managers from the IDesign methodology, that has a direct implication for the statefulness of the Manager versus the statelessness of the client.  So you need a thin client with little state; commands are routed to the Manager in the backend, and any updates then come back to the thin client as a response object or update message.
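A minimal sketch of that interaction (the command, response, and Manager names here are illustrative, not from IDesign or any framework): the thin client builds a command object, the stateful backend Manager handles it, and the result comes back as a response object:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ScheduleAppointmentCommand:  # built and sent by the thin client
    patient_id: str
    slot: str

@dataclass(frozen=True)
class CommandResponse:  # returned to the thin client
    ok: bool
    message: str

class SchedulingManager:
    """Backend Manager: owns the workflow state, so the client stays thin."""
    def __init__(self):
        self._booked = {}  # slot -> patient_id (the Manager's state)

    def handle(self, cmd: ScheduleAppointmentCommand) -> CommandResponse:
        if cmd.slot in self._booked:
            return CommandResponse(False, f"slot {cmd.slot} already booked")
        self._booked[cmd.slot] = cmd.patient_id
        return CommandResponse(True, f"booked {cmd.slot} for {cmd.patient_id}")
```

The client holds no booking state at all; it only renders whatever `CommandResponse` (or update message) comes back.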

Tuesday, March 06, 2018

Growing out of a legacy application

Trying to grow out of a dated legacy application is no small feat, and there is undoubtedly a cottage industry dedicated to consulting on how to do just that.

The most effective pattern I've encountered for this is the Strangler Application, named by Martin Fowler after the strangler fig trees he saw down under.  But it seems that there are better and worse ways of approaching a Strangler application.

I think Fowler's writeup captures the key aspects: a proper Strangler application needs both a static data strategy ("asset capture") and a dynamic behavior strategy ("event interception"), and both parts need to be involved in a two-way strategy for getting data and events back and forth.  Fowler stresses the strategic importance of this two-way flow, as it creates flexibility in determining the order of asset capture.  Words to live by...
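A toy sketch of how the two strategies fit together (all names hypothetical): event interception puts one front door in front of both systems, and each event is routed to the new system only once the asset it touches has been captured:

```python
class StranglerRouter:
    """Route events to the new system once their asset has been captured."""
    def __init__(self, legacy_handler, modern_handler):
        self.legacy = legacy_handler
        self.modern = modern_handler
        self.captured = set()  # asset capture: ids migrated to the new system

    def capture(self, asset_id):
        # (a real capture step would also migrate the asset's data)
        self.captured.add(asset_id)

    def dispatch(self, asset_id, event):
        # event interception: one front door, two backends
        target = self.modern if asset_id in self.captured else self.legacy
        return target(asset_id, event)
```

The flexibility Fowler describes shows up here as the freedom to call `capture` in whatever order makes sense, without changing how events flow.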

Monday, March 05, 2018

<retro> What is Image Quality? </retro>

originally on LinkedIn


I saw a talk at AAPM 2015 by Kyle Myers (from the FDA's Office of Science and Engineering Laboratories) about the Channelized Hotelling Observer (Barrett et al 1993), which is a parametric model of visual performance based on spatial frequency selective channels in areas V1 and V2.

She was discussing the use of the CHO model for selecting optimal CBCT reconstruction parameters, but I ran across an obvious application of the technique to the problem of JPEG 2000 parameter selection (Zhang et al 2004). 

It reminded me of the RDP plugin prototype I had developed a few years back, which demonstrated the ability to dynamically adjust compression performance during medical image review over an RDP or Citrix session. My prototype could also benefit from optimizing the CHO confidence interval (though it used JPEG XR instead of JPEG 2000), but possibly more interesting is the ability to predict the presentation parameters to be delivered over the channel, such as zoom/pan, window/level, or other radiometric filters to be applied to the image. It could also be used to determine when certain regions of the parameter space are not feasible for image review, because the confidence interval collapses (for instance, if you set window width = 1).

So if anyone asks "What is Image Quality", you can tell them: "Image Quality = Maximized CHO Confidence Interval" (or maximized CHO Area Under Curve). Got it?

P.S. There is a very nice Matlab implementation of the CHO model in a package called IQModelo, with some examples of calculating CI and AUC.
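To make the "Image Quality = detectability" idea concrete, here is a deliberately simplified Python sketch of the CHO figure of merit. It assumes independent channels (diagonal covariance), so the full Hotelling covariance inverse reduces to a per-channel sum; IQModelo and the Barrett et al. formulation use the full covariance, so treat this as a cartoon, not an implementation:

```python
import math

def cho_detectability(signal_feats, noise_feats):
    """Simplified CHO figure of merit, assuming independent channels:
    SNR^2 = sum_i (mean difference_i)^2 / pooled variance_i.
    Each argument is a list of per-trial channel-output vectors."""
    n_ch = len(signal_feats[0])
    snr2 = 0.0
    for i in range(n_ch):
        s = [trial[i] for trial in signal_feats]
        n = [trial[i] for trial in noise_feats]
        ms, mn = sum(s) / len(s), sum(n) / len(n)
        # pooled per-channel variance
        var = (sum((x - ms) ** 2 for x in s) +
               sum((x - mn) ** 2 for x in n)) / (len(s) + len(n) - 2)
        snr2 += (ms - mn) ** 2 / var
    return math.sqrt(snr2)

def auc_from_snr(snr):
    """For Gaussian test statistics, AUC = Phi(SNR / sqrt(2))."""
    return 0.5 * (1.0 + math.erf(snr / 2.0))
```

Zero detectability gives AUC = 0.5 (chance), and AUC grows monotonically with SNR, which is the quantity you would maximize when sweeping compression or reconstruction parameters.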

Friday, March 02, 2018

Better Image Review Workflows Part III: Analysis

The central purpose of reviewing images is to extract meaning from the images and then base a decision on this meaning.  Currently the human visual system is the only system that is known to be able to extract all the meaning from a medical image, and even then it may need some help.

As algorithms become better at meaning extraction, it's important to keep in mind that anything an algorithm can make of an image should, in theory, also be discernible by a human, even if the human needs a little help.  This principle can unify the problems of optimal presentation and computer-assisted interpretation, as it provides a method of tuning an algorithm so as to optimize presentation.  I went into a little more detail when I wrote about the FDA's Image Quality research.


Capsules and Cortical Columns


I'm still trying to figure out what to make of the Capsule routing mechanism proposed by Sabour et al late last year. It is potentially a move toward greater biological plausibility, if the claim is that each Capsule corresponds to a cortical column. If the Capsule network does reflect what happens in the visual system, its routing algorithm would need to be either developed or learned.

My point of reference here is the Shifter circuit (Olshausen 1993), which never explained how its control weights would have been "learned", despite other biologically plausible characteristics. I've always understood that learning control weights is more difficult than learning filter taps for a convolutional network, or RBM weights. I guess that's why reinforcement learning is so much harder to get right than CNN learning. But I never really thought the Shifter control weights would be learned anyway--I expected they would be the result of developmental processes, just as topographic mappings come from development.

The paper on training the Capsule network doesn't mention consideration of an explicit topographic constraint (like in the Topographic ICA). I wonder how difficult it would be to use a dynamic Self-Organizing Map as a means of organizing the units in the PrimaryCaps layer? Would this simplify the dynamic routing algorithm?
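For reference, the routing-by-agreement step from Sabour et al. 2017 is itself quite compact. A pure-Python sketch (no learning here, just the iterative routing between fixed per-capsule predictions):

```python
import math

def squash(v):
    """Capsule non-linearity: shrink the norm into [0, 1), keep direction."""
    n2 = sum(x * x for x in v)
    scale = n2 / (1.0 + n2) / math.sqrt(n2) if n2 > 0 else 0.0
    return [scale * x for x in v]

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def route(predictions, iterations=3):
    """Dynamic routing by agreement (Sabour et al. 2017).
    predictions[i][j] = vector u_hat predicted by input capsule i
    for output capsule j.  Returns the output capsule vectors v[j]."""
    n_in, n_out = len(predictions), len(predictions[0])
    b = [[0.0] * n_out for _ in range(n_in)]      # routing logits
    v = []
    for _ in range(iterations):
        c = [softmax(row) for row in b]           # coupling coefficients
        v = []
        for j in range(n_out):
            dim = len(predictions[0][j])
            s = [sum(c[i][j] * predictions[i][j][d] for i in range(n_in))
                 for d in range(dim)]
            v.append(squash(s))
        for i in range(n_in):                     # agreement update
            for j in range(n_out):
                b[i][j] += sum(p * q for p, q in zip(predictions[i][j], v[j]))
    return v
```

When two input capsules agree on an output capsule's pose, that output's norm grows; when their predictions cancel, it stays near zero. A topographic constraint on the PrimaryCaps layer would presumably bias which capsules get to agree in the first place.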

LoFi Ascii Pixel

I'm looking at some 'lofi' image processing in F#, and thought I would begin by publishing as gists (because it's very lightweight).
I started with a lofi ascii pixel output, so you can roughly visualize an image after you generate it. I will include some test distributions (gabor, non-cartesian stimuli) in a subsequent post.
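The original gists are in F#, but the idea is small enough to show in a few lines of Python: quantize normalized intensities onto a dark-to-light character ramp, one text row per scanline.

```python
# Map intensities in [0, 1] onto a dark-to-light ascii ramp.
RAMP = " .:-=+*#%@"

def ascii_pixel(value):
    """Quantize one normalized intensity to a single ramp character."""
    v = min(max(value, 0.0), 1.0)
    return RAMP[min(int(v * len(RAMP)), len(RAMP) - 1)]

def render(image):
    """image: 2-D list of floats in [0, 1] -> multi-line ascii string."""
    return "\n".join("".join(ascii_pixel(v) for v in row) for row in image)
```

Printing `render` output in a monospaced terminal gives a rough but immediate visualization of a generated test pattern.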

Monday, January 29, 2018

Teramedica Zero-Footprint Viewer


  • Server-side rendering is probably the easier option for an organization with limited technical resources. The delays are noticeable in the video, though I guess for an initial "Enterprise Imaging" offering it is probably good enough.
  • Web-browser based review is also key to zero footprint viewing
  • One could argue that installing an app doesn't really count as "zero footprint", though I guess it does minimize IT resources for deployment
  • Ability to deploy on a luminance-controlled monitor (i.e. in a radiology reading room) is probably also essential--I'm not sure Teramedica has that covered yet
  • How would this compare to a fat client running on a Citrix farm?  Probably somewhat better performance (due to lossy compression, which I guess radiologists are OK with?) but the Citrix farm's scalability would likely not be favorable, especially if OpenGL volume textures were being used for server-side rendering


Monday, August 14, 2017

Better Image Review Workflows Part II: DICOM Spatial Registrations

I remember meeting with our colleagues at Varian's Imaging Lab a few years back, to discuss (among other things) better ways of using DICOM Spatial Registration Objects (SROs) to encode radiotherapy setup corrections.  They had been considering support for some advanced features, like 4D CBCT and dynamic DRR generation.  

It was clear that having a reliable DICOM RT Plan Instance UID and referenced Patient Setup module would be immensely useful in interpreting the setup corrections.   But where would this information be stored?  As private attributes in the CBCT slices?  Or in the SRO itself?  The treatment record contains this relationship for each treatment field, but there is no way of encoding a reference to a standalone setup image within the treatment record (or so I've been told).
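Whatever object carries the references, the geometric payload of an SRO boils down to a 4x4 homogeneous matrix: the 16 row-major values of FrameOfReferenceTransformationMatrix. A minimal sketch of applying one (e.g., a couch shift) to a point, with no DICOM toolkit assumed:

```python
def apply_sro_matrix(matrix16, point):
    """Apply a DICOM SRO rigid transform to a 3-D point (mm).
    matrix16: the 16 row-major values of a 4x4
    FrameOfReferenceTransformationMatrix."""
    x, y, z = point
    h = (x, y, z, 1.0)  # homogeneous coordinates
    rows = [matrix16[r * 4:r * 4 + 4] for r in range(4)]
    return tuple(sum(m * p for m, p in zip(rows[r], h)) for r in range(3))
```

With an identity rotation and a translation in the last column, this reproduces a plain table shift; a Structured Report could then say *what* that shift means clinically.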


Siemens has, for years now, used a DICOM Structured Report to encode offset values, in lieu of an SRO, at least for portal images.  I think skipping the SRO entirely was a mistake, but when I look at the state of RT SROs today, it seems in retrospect that a semantically meaningful Structured Report to pull together the other objects (SRO, treatment record, CBCT slices) would be quite an advance over current practice.  Looking at some of the defined IODs for structured reports:
  • Mammography CAD Structured Report
  • Chest CAD Structured Report
  • Acquisition Context Structured Report
it does not seem too far fetched to consider an SR encoding relationships among a CBCT or portal image and their context objects.

I've started (again) reading David Clunie's Structured Report book, in hopes of gaining more insight into what can (or should not) be encoded using SRs.  It seems like a quite powerful capability--I'm wondering why the RT domain has not made more use of it...

Better Image Review Workflows Part I: Zombie Apocalypse

I was reading a bit on the developing RayCare OIS, and wondered why an optimization algorithm company would have an interest in developing an enterprise oncology informatics system.

It doesn't take much to realize that oncology informatics is rife with opportunities for applying optimization expertise.  Analysis of the flood of data is only the most obvious benefit from expertise in machine learning and statistical inference.  But even the bread and butter of oncology informatics--managing daily workflows and data--could benefit from this expertise.

For the data generated by daily workflows to be meaningfully used, the workflows (and supporting infrastructure) must be efficient.  Such an infrastructure needs to support workflows involving various actors (both people and systems), as well as a consistent data model shared by the actors to represent the entities participating in the workflows.

The DICOM Information Model for radiotherapy provides a relatively consistent, if somewhat incomplete (yet), model of how data is produced and consumed during RT treatment.  Such a data model can be used to help streamline workflows, as a semantically consistent data stream can be used as the input to the fabled prefetcher (see Clunie's zombie apocalypse scenario), in order to ensure the right data is available at the right time.  And if we know enough about the workflows (i.e. what processing needs to be done; what areas of interest are to be reviewed), then we can even prepare this beforehand, so when the reviewer picks the next item from the worklist, it is all ready to go.
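A toy version of that prefetcher (all names hypothetical): walk the worklist in order, and for each item stage the referenced instances a few items ahead of where the reviewer currently is, so nothing is fetched twice:

```python
def prefetch_plan(worklist, references, lookahead=2):
    """For each worklist item, return the instance UIDs to stage before
    the reviewer opens it.  `references` maps an item to the UIDs it
    needs; `lookahead` is how many items ahead we prefetch."""
    staged = set()
    plan = []
    for i, item in enumerate(worklist):
        to_stage = []
        for future_item in worklist[i:i + lookahead]:
            for uid in references.get(future_item, []):
                if uid not in staged:
                    staged.add(uid)
                    to_stage.append(uid)
        plan.append((item, to_stage))
    return plan
```

The optimization angle is deciding `lookahead` (and the fetch order within it) from predicted review times and transfer costs, which is exactly the kind of problem an optimization shop should enjoy.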

So if RayCare's OIS can make use of some optimization expertise to build a better autonomous prefetch algorithm (and the associated pre-processing and softcopy presentation state determination algorithms), this would probably be a pretty nifty trick.

But getting all the sources of data to fit into the DICOM information model, or (even better) fixing the DICOM information model to make it more suited for purpose, seems to be the big hiccup here.  Maybe with RayCare also working on the problem, things will begin to advance.  That's what free markets do, and we all want free markets, right?
