Hence, it is available to the marc_deformedGeometry.py script via "mesh_init_postprocessing()" and "mesh_get_unitlength()"; there is no need to set the scaling of the displacement vectors explicitly through an option, and displacements and nodal positions are now always consistent
"marc_extractData" is a simple variant of the "postResult" script that extracts all data from a macro *.t16 file (quadratic elements not yet supported) and writes it into an asciitable
"addDataToGeometry" searches for corresponding *.txt and *.vtk file in a given directory and adds the data from the *.txt file as SCALARS to the *.vtk file
The first script needed an additional function in the mesh module that returns the node corresponding to a given IP in a specific element
first call damask.core.mesh.mesh_init_postprocessing(meshfilename) to initialize all necessary mesh variables;
then damask.core.mesh.mesh_build_cellnodes(nodes) calculates the cell node positions for a given list of node positions.
The mesh file needed for the initialization is created automatically by mesh_init in DAMASK; a sketch of the sequence follows below.
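A minimal Python sketch of this sequence (the mesh file name, the array shape, and the return conventions are illustrative assumptions, not the documented interface):

    import numpy as np
    import damask

    # initialize mesh variables from the mesh file written by mesh_init (file name is an example)
    damask.core.mesh.mesh_init_postprocessing('job1.mesh')
    # scaling of the displacement vectors, consistent with the nodal positions
    unitlength = damask.core.mesh.mesh_get_unitlength()
    # node positions, assumed here as a 3 x Nnodes array
    nodes = np.zeros((3, 8))
    cellnodes = damask.core.mesh.mesh_build_cellnodes(nodes)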
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
For Marc simulations, run
./code/setup/setup_code.sh
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
changed all remaining f2py routines to a smarter determination of array size (requires f2py >= 2.0)
enabled the 3D visualization to work with odd resolutions by switching to linear reconstruction
PLEASE NOTE: Redefinition of routines for f2py might cause trouble -> DELETE DAMASK_ROOT/lib/damask/core.so in this case
further changes: added pure statements where possible, polished the code, unified the use of "Q" for "Quaternion", and reordered math to group similar routines together
0: uncorrected, slope per side length (physical dimension), e = res/dim
1: corrected by side length, slope per unit length, e = res/1
2: corrected such that the distance between Fourier points is 1, e = 1
always with respect to the mean length of the x, y, z directions
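As a sketch, the three scalings can be written down directly; res and dim here are illustrative 3-vectors of grid resolution and physical dimension (names are assumptions, not the solver's variables):

    import numpy as np

    res = np.array([16.0, 16.0, 16.0])   # example grid resolution
    dim = np.array([2.0, 1.0, 1.0])      # example physical dimensions

    e0 = res / dim     # 0: uncorrected, slope per side length
    e1 = res / 1.0     # 1: corrected by side length, slope per unit length
    e2 = np.ones(3)    # 2: distance between Fourier points equals 1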
this saves copying the same geometry description for different elements that are essentially similar regarding their IP count but differ in total node count.
introduced the quadratic tetrahedron (Marc element 127 -- element 157 might also work, but has not performed well in fully elastic calculations so far)
The f2py functions remaining in math.f90 now use assumed-size arrays in order to have simpler interfaces. This works only with Python 2.7!
Changed the Python pre- and postprocessing scripts accordingly.
If you encounter any problems with the core modules, try removing the old core.so in lib/damask
prevented FEsolving from potentially writing to a non-existing file
started to introduce PETSc into the make chain (nothing happens if PETSC_DIR is not set)
The mainly affected modules are IO and mesh. Most of the changes in mesh result from reordering the functions when grouping them by solver.
A further advantage is that the FE solvers do not need FFTW and kdtree2 anymore. The include files for these two libraries have moved to DAMASK_ROOT/lib now that I figured out how to use an include path in the Makefile.
Put all the files I got when testing compilation with Abaqus into a folder that is to become the Abaqus compilation test.
changed the order of arrays in the nearest neighbor search to exploit Fortran's column-major memory layout (illustrated below)
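The rationale: in column-major layout the first index varies fastest in memory, so storing each point as one column makes per-point access contiguous. This can be reproduced with NumPy, which supports both layouts (the array shape here is illustrative):

    import numpy as np

    # column-major (Fortran) layout: one point = one contiguous column
    points = np.zeros((3, 100000), order='F')
    print(points[:, 0].flags['F_CONTIGUOUS'])   # True: cache-friendly per-point access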
constitutive.f90 and homogenization.f90 now write out the state size during initialization
setup/setup_processing.py now uses byterecl to be compatible with binary files written out by the solver
removed cut_off parameter for damask_spectral
removed output of derived divergence measures and added RMS output in brackets
added comments and options to the makefile
- periodicity in x and z direction for Marc:
    $damask periodic x z
- periodicity in y direction for Abaqus:
    **damask periodic y
- periodicity in x and y direction for the spectral solver:
    periodic y x
added compiler switches for gfortran and ifort to check for standard conformity
old GNU compilers (<4.4) are no longer supported because they don't provide the C binding for FFTW
- removed unnecessary "return" before the end of subroutines and functions
- changed undetermined array lengths (:) to explicit ones (1:3)
To prevent problems with some code analysis tools:
- "3D one-liner loops" (using ";") only where "do" and "enddo" sit on the same line
- removed line continuations in OMP statements
made the makefile more flexible, removed heap-arrays switch
* Marc: node displacements are added to initial node coordinates (mesh_node0) to get current node positions (mesh_node), then IP coordinates are deduced
* Abaqus: IP coordinates are directly updated, no update of node coordinates!
* Spectral: for the moment, neither IP nor node coordinates are updated! Only dummy values (the initial IP coordinates) are passed
0 : only version info and all output from "hypela2"/"umat"
1 : basic outputs from "CPFEM.f90", basic output from initialization routines, debug_info
2 : extensive outputs from "CPFEM.f90", extensive output from initialization routines
3 : basic outputs from "homogenization.f90"
4 : extensive outputs from "homogenization.f90"
5 : basic outputs from "crystallite.f90"
6 : extensive outputs from "crystallite.f90"
7 : basic outputs from the constitutive files
8 : extensive outputs from the constitutive files
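For example, extensive homogenization output would be selected with level 4 (assuming a "verbosity" keyword in the debug configuration, as the following paragraph suggests):

    verbosity 4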
If verbosity is equal to zero, none of the counters in debug (e.g. debug_StressLoopDistribution or debug_cumDotStateTicks) are set during calculation. This might speed up parallel calculations, because all these counters need critical statements, which slow down parallel computation considerably.
In order to keep it like that, please follow these simple rules:
DON'T use implicit array subscripts:
    example:  real, dimension(3,3) :: A, B
              A(:,2) = B(:,1)        <--- DON'T USE
              A(1:3,2) = B(1:3,1)    <--- BETTER USE
In many cases the use of explicit array subscripts is inevitable for parallelization. Additionally, it is an easy means to prevent memory leaks.
Enclose all write statements in the following:
    !$OMP CRITICAL (write2out)
    <your write statement>
    !$OMP END CRITICAL (write2out)
Whenever you change something in the code and are not sure if it affects parallelization and leads to nonconforming behavior, please ask me and/or Franz to check this.