The same rendering area box_to_render produces different images when defined with different pixel sizes. How can we ensure that the output more accurately reflects the physical dimensions?
# Rasterize the shapes touching box_to_render; all dimensions converted to database units (DBU)
pixel_size = pya.DVector(0.0175, 0.0175)   # pixel size in µm
shapes_to_render = pya.Region(cell.begin_shapes_rec_touching(layer, box_to_render))
um_to_dbu = pya.DCplxTrans(ly.dbu).inverted()   # µm -> DBU transformation
box_to_render_dbu = um_to_dbu * box_to_render
pixel_size_dbu = um_to_dbu * pixel_size
nx = round(box_to_render_dbu.width() / pixel_size_dbu.x)   # pixel count in x
ny = round(box_to_render_dbu.height() / pixel_size_dbu.y)  # pixel count in y
areas = shapes_to_render.merged().rasterize(
    box_to_render_dbu.p1,  # origin
    pixel_size_dbu,        # pixel distance
    pixel_size_dbu,        # pixel size
    nx,
    ny
)
# Same rendering area, but with a 4x denser pixel grid
denseFactor = 4
pixel_size = pya.DVector(0.0175 / denseFactor, 0.0175 / denseFactor)
box_to_render_dbu = um_to_dbu * box_to_render
pixel_size_dbu = um_to_dbu * pixel_size
nx = round(box_to_render_dbu.width() / pixel_size_dbu.x)
ny = round(box_to_render_dbu.height() / pixel_size_dbu.y)
areas = shapes_to_render.merged().rasterize(
    box_to_render_dbu.p1,
    pixel_size_dbu,
    pixel_size_dbu,
    nx,
    ny
)
Comments
I just took a look at these images, but they look fine to me.
Also the code seems reasonable. But the edges of the polygon are on a 1 µm grid, not on a 17.5 nm raster. Hence there will always be pixels that the polygon only partially covers, which produces grayscale values. And these pixels will differ depending on the resolution and location of the rasterization grid. That is implied by the term "rasterization".
What exactly do you need?
Matthias
Hello, thank you for your response. I believe that rendering the same physical dimensions should result in the same output image area. Why do the images rendered at 17.5 nm and 17.5/4 nm differ? What settings are required to ensure that the output rendered image represents a physical area of 17.75 µm?
x = 12
y = 0
physical_width = x + 17.75
physical_height = y + 17.75
box_to_render = pya.DBox(x, y, physical_width, physical_height)
@Matthias It appears that the image rendered at 17.5 nm is closer to the actual physical area of 17.75 µm. Why is there such a significant discrepancy in the image rendered at 17.5/4 nm?
First of all, 17.5 nm / 4 is 4.375 nm. I doubt this can be represented precisely in integer multiples of the database unit, unless your DBU is 0.125 nm. Is it?
If not, "pixel_size_dbu" will not reflect your desired pixel dimension as this value is in integer multiples of the database unit.
So for example, when your DBU is 0.1nm, "pixel_size_dbu" will be 44 as this is what you get when you express 4.375 in integer multiples of 0.1, rounding up. This will effectively give you 4.4nm instead of 4.375nm. 17.5nm on the other hand, can be expressed as multiples of 0.1 (175).
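The rounding described above can be checked with plain Python. A sketch using exact fractions (0.1 nm DBU as in the example; no pya needed):

```python
import math
from fractions import Fraction

dbu = Fraction(1, 10)                   # 0.1 nm DBU, as in the example above
for pixel in (Fraction(35, 2), Fraction(35, 8)):   # 17.5 nm and 4.375 nm
    snapped = math.ceil(pixel / dbu)    # integer DBU count, rounding up
    effective = snapped * dbu           # pixel size actually used, in nm
    print(float(pixel), snapped, float(effective))
```

This reproduces the numbers quoted above: 17.5 nm snaps exactly to 175 DBU, while 4.375 nm becomes 44 DBU, i.e. an effective pixel size of 4.4 nm.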
Matthias
Hi Matthias, does the DBU have to be taken from a GDS file (for example dbu = layout.dbu, um_to_dbu = pya.DCplxTrans(ly.dbu).inverted()), or can it be set arbitrarily in the code? If um_to_dbu can be set in code, then pixel_size_dbu could be made an integer (pixel_size_dbu = um_to_dbu * pixel_size).
In this code, pixel_size = pya.DVector(0.0175, 0.0175) and um_to_dbu = 1000, so pixel_size_dbu = 17.5, which is not an integer. How can I set things up to get an actual physical pixel size of 17.5 nm? Can the rendering window and pixel_size_dbu be multiplied by different um_to_dbu values? For example, using um_to_dbu = 1000 when calculating the rendering window, to stay aligned with the coordinate scale of the GDS file, and um_to_dbu = 17 / 17.5 when calculating pixel_size_dbu.
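One way to sanity-check a candidate DBU is to test whether the desired pixel size is an integer multiple of it. A pure-Python sketch using exact fractions (the candidate DBU values here are illustrative):

```python
from fractions import Fraction

pixel_nm = Fraction(35, 2)     # 17.5 nm target pixel size
# Candidate DBUs in nm: 1, 0.5, 0.25, 0.125
for dbu_nm in (Fraction(1), Fraction(1, 2), Fraction(1, 4), Fraction(1, 8)):
    multiple = pixel_nm / dbu_nm
    ok = multiple.denominator == 1          # integer multiple of the DBU?
    print(float(dbu_nm), float(multiple), ok)
```

With a 1 nm DBU, 17.5 nm is 17.5 DBU, which is not an integer; with 0.5 nm (or finer power-of-two fractions), it is.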
My ultimate goal is to render the region (12, 0; 12+17.75, 17.75) from the C.gds file, using a pixel size of 0.0175 µm in actual physical dimensions, and to obtain a rendered image resolution of 1024×1024.
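Note that the three stated goals do not quite fit together: a 17.75 µm region at a 0.0175 µm pixel size is not 1024 pixels. A quick check in plain Python, using only the numbers from the post above:

```python
# How many 17.5 nm pixels fit into a 17.75 um wide region?
print(17.75 / 0.0175)     # ~1014.3 pixels, not 1024
# Conversely, what does a 1024-pixel image at 17.5 nm per pixel cover?
print(1024 * 0.0175)      # ~17.92 um, not 17.75 um
```

So at least one of region size, pixel size, or image resolution has to give.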
@Matthias Hi Matthias, I tried it out, and it seems I can only modify the DBU value of the GDS file. Why does um_to_dbu have to be an integer? Can it support floating-point numbers instead?
Hi @leo_cy,
Inside GDS, the geometry is stored as integer coordinates. The database unit (DBU) gives the unit in micrometers. In your sample case (C.gds), the database unit is 0.001, which means your polygon vertices will sit on coordinates that are multiples of 1 nm.
Correspondingly, transforming µm to DBU is a multiplication by 1000, the inverse of that value.
So if you choose a pixel size of 17.5 nm, the geometries can never match precisely to that raster. That is even more the case for the denser grid. Hence, the pixelization will never render pixels that are precisely empty or full.
You can choose a smaller DBU when generating the GDS - 0.5 nm for example, or 0.125 nm for the denser grid. This will allow you to create geometries that are compatible with the 17.5 nm or 4.375 nm raster of your pixels.
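The 0.125 nm suggestion can also be derived: the DBU must evenly divide both the 4.375 nm pixel raster and the 1 nm grid of the existing geometry, and the coarsest such unit is their greatest common divisor. A sketch with exact fractions (pure Python, no pya):

```python
from math import gcd
from fractions import Fraction

def fraction_gcd(a, b):
    # Largest rational g such that both a and b are integer multiples of g
    return Fraction(gcd(a.numerator * b.denominator, b.numerator * a.denominator),
                    a.denominator * b.denominator)

pixel = Fraction(35, 8)     # 4.375 nm (the denser pixel grid)
geometry = Fraction(1)      # 1 nm vertex grid of the existing GDS
dbu = fraction_gcd(pixel, geometry)
print(float(dbu))           # coarsest DBU compatible with both grids
```

This yields 0.125 nm; 17.5 nm is then 140 DBU and 4.375 nm is 35 DBU, both exact integers.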
But it's up to you to create a geometry that is compatible with the rasterization grid. If you fail to do so, your images will always be blurry.
Matthias
OK, thank you very much for your answer