CT scan image stack reconstruction: increase resolution locally?


Hi, I have the following question: is it possible to locally set the resolution of an isosurface mesh generated from an image stack? By locally, I mean, for example, by specifying a box within the bounding box of the image stack.

The situation is the following: I have an image stack from a CT scan with a resolution of 1290 x 690 px per image and 1336 images high. I’m using the Monolith add-on in Grasshopper to generate a model using up to 50% of the available resolution, resulting in a mesh of about 2.5 million vertices and 4.9 million faces. At this point Rhino starts getting less responsive and a bit cumbersome to work with. However, I don’t even need such a high resolution for the full model; 25% would be enough in this case. What I’m more interested in are some small local features that I would like to see at 100% resolution, without the rest of the model occluding the view of those features. I did try to simply mesh-boolean/split the area of interest out of the full model, but it seems counterintuitive to first generate a mesh with millions of faces and then throw 95% of it away.

Therefore I’m imagining an approach where I can define a second, smaller box inside the main box that acts as a kind of mask, or performs a sort of boolean intersection of the MonolithAssemblyGH object with the second box, thereby defining the volume where the iso mesh would be generated.

So my question would be, first of all, is something like this possible or planned to be implemented? And second, what alternative approaches do you see for achieving something like what is outlined above? Thank you very much for your responses!



Ok, I managed to do this using the mask idea. I used the “multiply sources” component to multiply the shape channel, which is based on the CT-scan data source, with a geometric source generated from the bounding box: all the voxels inside the bounding box have the value 1.0, and all the voxels outside have the value 0.0. This effectively allows me to use the brep box as a mask of the CT-scan.
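For anyone who wants to reproduce this outside of Monolith, the same masking operation can be sketched in plain NumPy. This is not Monolith’s API, just an illustration of the idea: the voxel grid, box bounds, and values below are made up.

```python
import numpy as np

# Stand-in for the CT-scan shape channel: a voxel grid of values
# in [0, 1], indexed (z, y, x). Dimensions here are arbitrary.
voxels = np.random.rand(100, 69, 129).astype(np.float32)

def box_mask(shape, lo, hi):
    """Binary geometric source: 1.0 inside the axis-aligned voxel
    box [lo, hi), 0.0 everywhere outside."""
    mask = np.zeros(shape, dtype=np.float32)
    mask[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = 1.0
    return mask

# "Multiply sources": voxels outside the box drop to 0.0 and will
# produce no isosurface; voxels inside keep their original values.
mask = box_mask(voxels.shape, lo=(20, 10, 30), hi=(60, 50, 90))
masked = voxels * mask
```

Running a marching-cubes style isosurface extraction on `masked` instead of `voxels` would then only generate geometry inside the box.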



Nice. An alternative approach might be to simply crop the original images (using the 100% base images) and save them as a separate stack of images, then create the model from the cropped stack. You’d likely need a script or custom action to do this, but there are plenty of programs that can batch process images.
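The batch crop could be scripted in a few lines with Pillow. This is just a sketch: the folder names, file extensions, and crop box below are placeholders you’d replace with your own.

```python
import os
from PIL import Image  # Pillow

def crop_stack(src, dst, box):
    """Crop every slice image in folder `src` to `box`
    (left, upper, right, lower, in pixels) and save it under
    the same file name in folder `dst`."""
    os.makedirs(dst, exist_ok=True)
    for name in sorted(os.listdir(src)):
        if not name.lower().endswith((".png", ".tif", ".tiff", ".jpg")):
            continue
        with Image.open(os.path.join(src, name)) as im:
            im.crop(box).save(os.path.join(dst, name))

# Example call (placeholder paths and crop window):
# crop_stack("ct_stack", "ct_stack_cropped", (400, 200, 700, 450))
```

Feeding the cropped stack back into Monolith at 100% resolution would then give you only the region of interest, without ever building the full multi-million-face mesh.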