• This forum is the machine-generated translation of www.cad3d.it/forum1 - the Italian design community. Several terms are not translated correctly.

2d cad and multicore processors (antediluvian)

geofabiuz

Guest
Hi everyone,
I am here to ask an age-old question that many of you have probably already asked and, in the absence of answers and solutions, have ended up letting drop.
I can't understand why in 2015 there is still no cad that, in basic 2d drafting, i.e. the basis of all operations, exploits the multicore capabilities of modern processors: it sees a modern 4-, 6-, 8- or 10-core processor as a dual core, and it doesn't budge from there, with the consequent repercussions in terms of slowness and inefficiency of a program that cripples the computing capabilities of the machine.
What prevents developers from turning the computation from serial to parallel, as was done for the graphic rendering part? commercial choices or mathematical limits?
I tried to read up a bit and I have my own idea about it, but I cannot believe that the big cad producers cannot get past this barrier, which has become a real constraint. the hardware that a cad manages to use today is largely that of a machine from 6-7 years ago, and this is absurd!!!!
Obviously this also concerns gis programs etc., that is, everything based on the management of points with coordinates.
I am interested in your ideas about it. :mad:

Hello and thanks,
a frustrated and very dissatisfied cad user!
 
I understand your frustration :)
In my opinion it is not a matter of the researchers' ability; rather, companies spend less and less money on research into a technology that is increasingly heading towards retirement.
the greater effort now goes more and more towards 3d technology, which allows greater functionality and flexibility of the models: rendering, fem, assemblies, cam etc...

I have already read that iso is getting ready to introduce standards for the dimensioning of 3d models, so as to drop 2d drawings more and more and get to the point of sending cad models directly to suppliers...
 
some problems are not due to "commercial strategies".

for parametric 3d cad it definitely is, and will remain, impossible to exploit multicore cpus.

the reason lies in the way the model is built. to exploit multicore there must be operations that can be performed in parallel, that is, the result of one is completely independent of the result of the other.

Now, imagine a plate with a hole: in a parametric cad the volume depends on the section that is extruded along a direction. the position of the hole depends on the body of the plate; the hole cannot be computed first because there would not even be the references to place it.

therefore the cad simply walks the feature tree and is forced to do it in single steps, one after the other, from parent to child. impossible to do it in parallel.
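to make the idea concrete, here is a minimal sketch in c++ (invented class names, not any vendor's real engine) of why that regeneration loop is serial: each feature builds on the solid produced by its parent, so no step can start before the previous one has finished.

```cpp
// Minimal sketch (not any real CAD kernel): a feature tree regenerates
// serially because each feature needs its parent's result before it can
// even locate itself.
#include <iostream>
#include <memory>
#include <string>
#include <vector>

struct Solid {                      // stand-in for the real B-rep
    std::string description;
};

struct Feature {
    // Builds the new solid FROM the parent's result: this data dependency
    // is what forces one-after-the-other execution.
    virtual Solid apply(const Solid& parent) const = 0;
    virtual ~Feature() = default;
};

struct Extrude : Feature {
    Solid apply(const Solid& parent) const override {
        return { parent.description + " -> extruded plate" };
    }
};

struct Hole : Feature {
    Solid apply(const Solid& parent) const override {
        // The hole is positioned relative to faces of the plate,
        // so it cannot be computed before the plate exists.
        return { parent.description + " -> hole on top face" };
    }
};

int main() {
    std::vector<std::unique_ptr<Feature>> tree;
    tree.push_back(std::make_unique<Extrude>());
    tree.push_back(std::make_unique<Hole>());

    Solid model{ "base sketch" };
    for (const auto& f : tree)      // inherently serial: parent -> child
        model = f->apply(model);

    std::cout << model.description << '\n';
}
```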

if instead you run fem analysis or rendering you can be sure the cpu is saturated and all the cores work at 100%.

for non-parametric cads the use of multicore is perhaps feasible: you should ask those who use creo direct or similar whether, when loading a large assembly, the cpu is fully exploited.

for 2d you could probably use parallel computation, but maybe the cads are not optimized for it because it is not a computationally heavy task.
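just to illustrate, a small c++ sketch (hypothetical data, not taken from any real cad) of the kind of 2d work that would parallelize easily: each polyline is independent of the others, so the chunks can be handed to separate cores.

```cpp
// Sketch: 2D work on independent entities (total length of many polylines)
// parallelizes trivially because no entity depends on another.
#include <cmath>
#include <future>
#include <iostream>
#include <vector>

struct Pt { double x, y; };
using Polyline = std::vector<Pt>;

double length(const Polyline& p) {
    double L = 0.0;
    for (size_t i = 1; i < p.size(); ++i)
        L += std::hypot(p[i].x - p[i - 1].x, p[i].y - p[i - 1].y);
    return L;
}

int main() {
    std::vector<Polyline> drawing(100000, Polyline{{0, 0}, {1, 1}, {2, 0}});

    // One async task per chunk: each chunk is processed on its own core.
    const size_t chunks = 8;
    const size_t per = drawing.size() / chunks;
    std::vector<std::future<double>> parts;
    for (size_t c = 0; c < chunks; ++c) {
        size_t lo = c * per;
        size_t hi = (c + 1 == chunks) ? drawing.size() : lo + per;
        parts.push_back(std::async(std::launch::async, [&, lo, hi] {
            double s = 0.0;
            for (size_t i = lo; i < hi; ++i) s += length(drawing[i]);
            return s;
        }));
    }

    double total = 0.0;
    for (auto& f : parts) total += f.get();
    std::cout << "total length: " << total << '\n';
}
```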
 
All correct.

besides fem and rendering, another example (at least for creo) is the loading and, if compression is enabled, the in-memory decompression of compressed files: the files can be processed in no particular order, so multiple cores can work on them. but as soon as everything is in memory, regeneration starts, and that uses only one core. I know this improvement was introduced a few versions ago; it does not push the cpus to 100% but it should help a little.
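roughly, I imagine it works like the sketch below (my own guess in c++, the decompress/regenerate functions are invented placeholders, this is not ptc's code): the files are inflated in memory by independent tasks, then a single serial pass does the regeneration.

```cpp
// Illustrative sketch only: compressed part files can be inflated in memory
// in any order (one task per file keeps several cores busy); the regeneration
// that follows is a single serial pass on one core.
#include <future>
#include <iostream>
#include <string>
#include <vector>

std::string decompress(const std::string& file) {        // placeholder
    return "data of " + file;
}

void regenerate(const std::vector<std::string>& parts) {  // placeholder
    // single-threaded: the feature history of each part is replayed in order
    std::cout << "regenerating " << parts.size() << " parts on one core\n";
}

int main() {
    std::vector<std::string> files{ "frame.prt", "shaft.prt", "cover.prt" };

    // Phase 1: order does not matter -> one async task per file.
    std::vector<std::future<std::string>> jobs;
    for (const auto& f : files)
        jobs.push_back(std::async(std::launch::async, decompress, f));

    std::vector<std::string> inMemory;
    for (auto& j : jobs) inMemory.push_back(j.get());

    // Phase 2: once everything is in memory, regeneration is serial.
    regenerate(inMemory);
}
```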
 
a while ago I posted a link for running a system benchmark, for those who have wildfire or creo.

it can be found at:
http://www.proesite.com/ocusb6/ocusb6.htm

all areas of the program are tested:

the ocus benchmark version 6 performs the following tasks:

retrieve generic assembly (cp)
retrieve assembly 1 (cp)
retrieve large assembly (gr)
250 wireframe view redraws (gr)
120 wireframe view redraws with datums on (gr)
5 hidden view redraws (gr)
200 hidden view redraws with fast hlr (gr)
250 shaded mouse spins (gr)
60 shaded mouse spins with reflection (gr)
250 shaded view redraws (gr)
150 shaded view redraws with edges (gr)
15 shaded pan and zoom (gr)
18 wireframe mouse zooms (gr)
initiate advanced shaded mode (cp)
8 advanced shaded mouse zooms (gr)
25 very advanced shaded spins (gr)
10 save jpeg (cp)
end advanced shaded mode (cp)
30 screen translates (gr)
50 automatic regenerates (cp)
80 perspective views (gr)
8 mass prop calculations (cp)
18 global interference checks (cp)
4 iges exports (cp+di)
6 step exports (cp+di)
4 drawing creations (cp)
4 regen views hidden line (cp)
4 regen views no hidden (cp)
4 pdf file creations (cp+di)
4 dxf file creations (cp+di)
erase all from memory (mem)

in practice, a trail file is provided that makes the cad perform all these operations and records the execution times.
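the principle is just "run a list of named tasks and take the wall-clock time of each", something like this c++ sketch (the task names here are made up, they are not the actual trail-file commands):

```cpp
// Rough sketch of what such a benchmark does under the hood: run a list of
// named tasks and record wall-clock time for each one.
#include <chrono>
#include <functional>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

int main() {
    using clock = std::chrono::steady_clock;

    std::vector<std::pair<std::string, std::function<void()>>> tasks = {
        { "retrieve assembly",  [] { /* load files here */ } },
        { "250 shaded redraws", [] { /* redraw loop here */ } },
        { "step export",        [] { /* export here */ } },
    };

    double total = 0.0;
    for (auto& [name, run] : tasks) {
        auto t0 = clock::now();
        run();
        double s = std::chrono::duration<double>(clock::now() - t0).count();
        total += s;
        std::cout << name << ": " << s << " s\n";
    }
    std::cout << "total: " << total << " s\n";
}
```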
 
geofabiuz, I see from your signature that you use acadmap.

see if these two links can be useful to you:
http://forums.autodesk.com/t5/autoc...ap-201x-64-bits-and-memory-usage/td-p/4737025
http://help.autodesk.com/view/acdlt/2015/enu/?guid=guid-6170c0ec-0910-46a6-81df-de0a888573a0

speaking more generally, I don't know autocad 2015; years ago I used 2011. autocad 2011 was definitely not optimized: on 32-bit systems it shut up shop when the ram in use reached about 1.2 giga, and in general it used a disproportionate amount of ram, 300/400 mega just to open very simple drawings.

but all in all it was also logical: in 2011 you could no longer find a 32-bit pc on the market. why optimize it for platforms that were already dead?

when I used wildfire iv 32-bit, doing fem analysis, it took all of the maximum 2 giga of addressable memory. I ran analyses that used 1.8, 1.9 giga and the system remained stable. same thing for modeling: it managed to exploit all the memory addressable by a single process under 32-bit systems.
 
....

therefore the cad simply walks the feature tree and is forced to do it in single steps, one after the other, from parent to child. impossible to do it in parallel.

...
Are you sure about this?
I imagine the cad executes instructions similar to those of macros. Now, in macros there are also (not always) for/next commands where you have a series of cycles. what would prevent having some of the "cycles" done by one processor and some by another?
couldn't the single feature, which visually presents itself as "indivisible", be a series of instructions that could well be processed in parallel?
I ask this because I have read that some cads are preparing versions that exploit multiprocessing. I have no idea whether this is related to the fact that, by including cad and fem within the same system, they need to implement multiprocessing, even though on the cad side it brings no advantage.
could it be that, until now, multiprocessing has not been developed because there were other limits, such as the graphics card, the system memory or the hard disk, that constrained the system as a whole?
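to show what I mean about splitting the work of a single feature, a small c++ sketch (purely hypothetical names): even if the feature list has to be replayed in order, the work inside one step, for example tessellating each face of the result for display, could be farmed out to separate cores.

```cpp
// Sketch of the idea: even if the feature list must be replayed in order,
// the work *inside* one step can sometimes be split, e.g. tessellating each
// face of the result on its own core once that step's geometry is known.
#include <future>
#include <iostream>
#include <vector>

struct Face { int id; };
struct Mesh { int faceId; int triangles; };

Mesh tessellate(const Face& f) {          // independent per face
    return { f.id, 100 + f.id };          // dummy triangle count
}

int main() {
    std::vector<Face> faces{ {0}, {1}, {2}, {3} };

    std::vector<std::future<Mesh>> jobs;
    for (const auto& f : faces)           // fan out: one task per face
        jobs.push_back(std::async(std::launch::async, tessellate, f));

    for (auto& j : jobs) {
        Mesh m = j.get();
        std::cout << "face " << m.faceId << ": " << m.triangles << " tris\n";
    }
    // The *next* feature still has to wait for all of this to finish,
    // so the feature-to-feature order remains serial.
}
```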
 
I confirm what painaz says.
I know the nx developers fairly well and I have exchanged some ideas with them about this: regeneration is a sequential process that cannot be parallelized.
the only operations that can be parallelized are:
- Boolean operations between different bodies (see the sketch below)
- hlr
- calculations
- static and dynamic rendering
and little else
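for the first item of the list, something like this c++ sketch is what is meant (booleanUnite is an invented stand-in, not a real kernel call): boolean operations on different body pairs touch disjoint data, so each pair can run in its own task.

```cpp
// Minimal sketch: Boolean operations on *different* body pairs touch disjoint
// data, so each pair can be processed by its own task in parallel.
#include <future>
#include <iostream>
#include <string>
#include <utility>
#include <vector>

struct Body { std::string name; };

Body booleanUnite(const Body& a, const Body& b) {   // placeholder kernel call
    return { a.name + "+" + b.name };
}

int main() {
    std::vector<std::pair<Body, Body>> pairs = {
        { {"bracket"}, {"boss"} },
        { {"housing"}, {"rib"}  },
        { {"plate"},   {"pad"}  },
    };

    std::vector<std::future<Body>> jobs;
    for (const auto& p : pairs)           // independent pairs -> parallel
        jobs.push_back(std::async(std::launch::async,
                                  booleanUnite, p.first, p.second));

    for (auto& j : jobs)
        std::cout << "result body: " << j.get().name << '\n';
}
```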
 
and then, how much does the time lost waiting for the regeneration of the model affect daily design time? i.e. how much time do we spend thinking about what to do, and how much waiting for the computer to do it? for someone who designs I think this ratio is something like 99:1, for someone doing fem or rendering maybe 1:10.
 
...
- Boolean operations between different bodies
...
That can be done, then.
So, in multibody environments such as nx and v5/v6, does anything prevent two bodies affected by a common change from being updated by two separate processes?

Anyway, I think something is moving. Perhaps not as much as hoped, but it seems to me to be moving, and not only for rendering and cae:
http://www.spatial.com/blog/3d-modeling/leveraging-multi-core-hardware-your-application
I think that if dassault does it, siemens and the others can do it too. not that it would be useful to 50% of users. maybe just 20%.

and then, how much does the time lost waiting for the regeneration of the model affect daily design time? i.e. how much time do we spend thinking about what to do, and how much waiting for the computer to do it? for someone who designs I think this ratio is something like 99:1, for someone doing fem or rendering maybe 1:10.
depends on what you do. If you make a change to a machine at the origin (in a logical sense) and it propagates, in parallel, to 100 "mythromacchine", doing it in parallel may save you 20-30 minutes a day that can be used for something else.
 
Have faith.
nx already does it... if the model is multibody.
nx11 will extend and optimize these capabilities.
in short: the beta is in September
 
from the archicad website http://www.graphisoft.com/archicad/archicad-19/overview/
"archicad 19 is now faster than ever! no more waiting for views to load. graphisoft has extended its robust 64-bit and multi-processing technologies"

from which I understand that archicad has improved its multi-processor technology. will it also apply to multi-core? so something can be done?
 
I understand your frustration :)
In my opinion it is not a matter of the researchers' ability; rather, companies spend less and less money on research into a technology that is increasingly heading towards retirement.
the greater effort now goes more and more towards 3d technology, which allows greater functionality and flexibility of the models: rendering, fem, assemblies, cam etc...

I have already read that iso is getting ready to introduce standards for the dimensioning of 3d models, so as to drop 2d drawings more and more and get to the point of sending cad models directly to suppliers...
Excuse me, but isn't the 3d model always built starting from a 2d model? and so, right up to the rendering, which is the "dress" of your design, aren't you always in the same situation, with the same limit and the same problem?
Moreover, to "parallelize the 3d", wouldn't it be possible to divide the design space into as many volumes as there are processors, assign the regeneration of each volume to one of them, and then reassemble everything at the end once all the volumes have been generated? I don't know if I've explained myself...

Do you really think that acadmap 2011 and 2015 64-bit have such different characteristics??

I believe there is some inertia on the part of the researchers: can anyone point to an existing 64-bit multithreaded cad?
 
Excuse me, but isn't the 3d model always built starting from a 2d model?
no, in modern parametric cads each part file stores two types of information (and others too of course, but let's talk about these two):

1) the "bare geometry", that is the definition of all the geometric entities that make up the model: spheres, cylinders, planes, cones, paraboloids, etc.

2) the "sequence" of commands that the designer issued to arrive at that final result.

when the designer makes a change at a certain step of the modeling, for example moving a hole or changing a fillet, the cad re-runs the whole sequence from that point to the end, and at the end of the operation the bare geometry is rewritten in the file, which is what you see on screen.

It is therefore clear that parallelizing this is difficult, because by its nature it is sequential; on the other hand, it is also clear that for models where the modeling steps number in the hundreds or thousands, regeneration following a change is a very long job.
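if it helps, a toy c++ sketch of those two kinds of data and of the replay (all names invented, this is not how any specific cad stores its files): the part keeps the finished geometry plus the ordered command history, and an edit at step k forces a serial replay from k to the end.

```cpp
// Toy sketch: a part file holds the finished geometry plus the ordered
// command history; a change at step k forces a serial replay of steps k..end,
// after which the cached geometry is rewritten.
#include <iostream>
#include <string>
#include <vector>

struct Geometry { std::string shape; };                  // the "bare geometry"

struct Command {
    std::string text;                                    // e.g. "extrude 10mm"
    Geometry apply(const Geometry& g) const {
        return { g.shape + " | " + text };
    }
};

struct PartFile {
    Geometry cached;                                     // what you see on screen
    std::vector<Command> history;                        // the recorded sequence

    Geometry replayUpTo(size_t k) const {
        Geometry g{ "sketch" };
        for (size_t i = 0; i < k; ++i) g = history[i].apply(g);
        return g;
    }

    // Edit step k, then replay from k to the end, one step after another.
    void edit(size_t k, const std::string& newText) {
        history[k].text = newText;
        Geometry g = replayUpTo(k);
        for (size_t i = k; i < history.size(); ++i)      // serial by nature
            g = history[i].apply(g);
        cached = g;                                      // rewritten at the end
    }
};

int main() {
    PartFile p{ {}, { { "extrude plate" }, { "hole d=8" }, { "fillet r=2" } } };
    p.edit(1, "hole d=10");                              // move/resize the hole
    std::cout << p.cached.shape << '\n';
}
```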
 
How come the cad keeps re-running the sequence of commands instead of generating a new baseline of commands once the design is accepted (can't something like the so-called undo do that)?
 
