Hi,
My current use case involves OASIS files >500 GB, and when I do layout.read() in KLayout, the memory usage is roughly 10x the OASIS file size.
Is there a way to stream the file, or some other technique to deal with such huge memory usage?
thx
Comments
Hi @ryanke,
First, make sure you're using non-editable Layout objects. Editable layout objects take significantly more memory. When you create the Layout object, pass `false` for the editable flag.
Second, no, there is no stream mode. Stream mode and OASIS is an ugly combination as OASIS allows forward references and has some design flaws that make truly serial stream mode implementations difficult.
But with a file size like that, I don't think KLayout is the right choice for you. You may be looking for a scalable commercial solution with GPU acceleration and all the other fancy stuff.
Matthias
This suggests to me: maybe some utility could scrape through "bloaty" database formats and, cell by cell, bottom up, "pack down" cells to use the least-memory "styles"?
Proving all that lossless and error-free could be quite a challenge, as its application is with databases too large to load. Might have to do that verification on the Big Iron end with a blowback DB.
Curiosity killed the cat: why would an OASIS file be this big?
It could be fractured data for mask making. If yes, does it contain data for more than one mask? Or is it a full reticle?
Does the OASIS file take advantage of CBLOCKs?
Besides flattening and fracturing, the other thing that can blow up the size of an OASIS file is when it contains instance and/or net names as either TEXT or PROPERTY records.
https://github.com/klayoutmatthias/dump_oas_gds2 can tell you.
And more importantly, what do you want to do with the file ?
@Matthias thanks for your response, I do use the non-editable mode.
@StefanThiede thanks for your response as well. My workflow is quite simple: I read the OASIS file and then use multi_clip to split it into smaller OASIS files, but the read explodes my RAM.
So I am eager to know if there's a method to reduce the peak memory when reading.
Hi @ryanke,
As a rough guess, OASIS without CBLOCK compression takes about 3 times the file size as memory for loading, with CBLOCK compression that factor can be bigger - maybe 6 times. That is for 32bit coordinates. With 64bit coordinates that is even more. I assume you have CBLOCK compression, so for loading 500G CBLOCK-compressed OASIS, I guess you would need 3 to 4TB of RAM.
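Those rules of thumb translate into a back-of-the-envelope estimate like this (the factors are guesses from the paragraph above, not measurements):

```python
# Rough RAM estimate for loading an OASIS file, using the
# rule-of-thumb expansion factors above (32-bit coordinates)
def estimate_ram_gb(file_size_gb, cblock=True):
    factor = 6 if cblock else 3  # CBLOCK-compressed files expand more
    return file_size_gb * factor

print(estimate_ram_gb(500))                # 3000 GB, i.e. about 3 TB
print(estimate_ram_gb(500, cblock=False))  # 1500 GB
```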
You can bring that down if your file is multi-layered. You can read the file layer by layer by selecting a single layer in each pass, perform the clips, and afterwards merge the clips back into a single file.
Depending on your technology node, a full layer stack has maybe 5 to 20 layers with high and similar complexity, and assuming that the per-layer information contributes to maybe 90% of the memory and hierarchy information to the remaining 10%, 1 TB of memory should be enough. But that strongly depends on the nature of the file.
The worst case is flat, single-layer data. In that case, a serial approach would work well, but that is not something KLayout offers out of the box. In a hierarchical case, you would need a two-pass tool that analyzes the hierarchy in the first pass, plans the clips, and executes this plan in a serial and memory-efficient way. Such a tool can be built, but that is definitely not in the scope of a free and open source project. I mean, given that the mask cost in a recent node (and 500G indicates that) is a six-digit number, you should be in a position to spend a few bucks on a proper tool.
Matthias