temperature is stored in crystallite, but is homogeneous on one IP (i.e. not a component (grain) quantity); it is an input value passed in by the BVP solver.
introduced heat, a component (grain) quantity which is homogenized before being returned to the heat transfer solver.
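A minimal sketch of what "homogenized before being returned" could mean, assuming a plain volume-fraction average over the components of one IP (the subroutine and variable names are hypothetical, not the actual DAMASK interface):

! hypothetical sketch: volume-fraction average of a per-grain heat quantity
subroutine homogenize_heat(Ngrains, grainHeat, grainVolFrac, heatOfIP)
  implicit none
  integer, intent(in) :: Ngrains
  real(8), dimension(Ngrains), intent(in) :: grainHeat    ! component (grain) quantity
  real(8), dimension(Ngrains), intent(in) :: grainVolFrac ! volume fraction per grain
  real(8), intent(out) :: heatOfIP                        ! single value returned to the heat transfer solver

  heatOfIP = sum(grainHeat(1:Ngrains) * grainVolFrac(1:Ngrains))
end subroutine homogenize_heat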
went ahead with the removal of dummy functions in homogenization and constitutive; this time mainly reduced function signatures to the quantities that are actually needed.
saves copying the same geometry description for different elements that are essentially similar regarding the IP number but differ in total node count.
introduced quadratic tetrahedron (Marc element 127 -- element 157 might also work, but has not performed well in fully elastic calculations so far)
added some OMP FLUSH statements where necessary
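A self-contained sketch of the FLUSH idiom (the flag name is made up; this is not the actual DAMASK code): FLUSH forces a shared variable to be written back to and re-read from memory, so all threads see its current value:

program flushDemo
  implicit none
  logical :: ready
  ready = .false.
  !$OMP PARALLEL SECTIONS SHARED(ready)
  !$OMP SECTION
    ready = .true.
    !$OMP FLUSH (ready)     ! publish the new value of the shared flag
  !$OMP SECTION
    do
      !$OMP FLUSH (ready)   ! re-read the shared flag from memory
      if (ready) exit
    end do
  !$OMP END PARALLEL SECTIONS
end program flushDemo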
replaced OpenMP do with forall constructs where possible; this is much safer and perhaps even as fast for small loops
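For illustration (the arrays and scale factor are hypothetical), the kind of transformation meant here:

program forallDemo
  implicit none
  integer :: i
  real, dimension(3,3) :: A, B
  real, parameter :: c = 2.0
  B(1:3,1:3) = 1.0

  ! before: OpenMP worksharing, with thread management overhead
  !$OMP PARALLEL DO
  do i = 1,3
    A(1:3,i) = B(1:3,i) * c
  enddo
  !$OMP END PARALLEL DO

  ! after: forall, safe by construction and competitive for small loops
  forall (i = 1:3) A(1:3,i) = B(1:3,i) * c

  write(6,'(3f6.1)') A(1:3,1:3)
end program forallDemo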
changed the index order of arrays in the nearest neighbor search so that the fast-changing index comes first, making the access pattern Fortran fast (column-major)
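A hypothetical sketch of why the order matters: with coordinates stored as (component, point), the distance kernel walks contiguous memory, since Fortran arrays are column-major:

program nearestDemo
  implicit none
  integer, parameter :: Npoints = 100000
  integer :: i, bestIndex
  real :: d2, best
  real, dimension(3,Npoints) :: coords   ! fast index first: coords(1:3,i) is contiguous
  real, dimension(3) :: query

  call random_number(coords)
  call random_number(query)

  best = huge(best)
  bestIndex = 0
  do i = 1,Npoints
    d2 = sum((coords(1:3,i) - query(1:3))**2)   ! contiguous column access
    if (d2 < best) then
      best = d2
      bestIndex = i
    endif
  enddo
  write(6,*) 'nearest neighbor:', bestIndex
end program nearestDemo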
constitutive.f90 and homogenization.f90 write state size out during initialization
setup/setup_processing.py uses byterecl to be compatible with the binary files written by the solver
removed cut_off parameter for damask_spectral
removed output of derived divergence measures and added RMS output in brackets
added comments and options to the makefile
added compiler switches for gfortran and ifort to check for standard conformity
old GNU compilers (<4.4) are no longer supported because they don't provide the C bindings for FFTW
* also added some more OpenMP directives to increase the fraction of parallelized code.
* "implicit none" was missing in two subroutines of homogenization and constitutive.
verbosity levels for debug output:
0 : only version info and all output from "hypela2"/"umat"
1 : basic outputs from "CPFEM.f90", basic output from initialization routines, debug_info
2 : extensive outputs from "CPFEM.f90", extensive output from initialization routines
3 : basic outputs from "homogenization.f90"
4 : extensive outputs from "homogenization.f90"
5 : basic outputs from "crystallite.f90"
6 : extensive outputs from "crystallite.f90"
7 : basic outputs from the constitutive files
8 : extensive outputs from the constitutive files
If verbosity is zero, none of the debug counters (e.g. debug_StressLoopDistribution or debug_cumDotStateTicks) are updated during the calculation. This can speed up parallel runs considerably, because all these counters need critical statements, which severely slow down parallel computation.
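A minimal sketch of such a guard (the verbosity handling and counter names are simplified stand-ins for the debug module):

program debugGuardDemo
  implicit none
  integer, parameter :: verbosity = 0       ! hypothetical debug verbosity level
  integer :: i
  integer, dimension(10) :: loopDistribution
  loopDistribution(1:10) = 0

  !$OMP PARALLEL DO
  do i = 1,1000
    if (verbosity > 0) then                 ! at level 0 the critical section is skipped entirely
      !$OMP CRITICAL (countLoops)
        loopDistribution(mod(i,10)+1) = loopDistribution(mod(i,10)+1) + 1
      !$OMP END CRITICAL (countLoops)
    endif
  enddo
  !$OMP END PARALLEL DO
  write(6,*) 'counted iterations:', sum(loopDistribution(1:10))
end program debugGuardDemo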
In order to keep the code cleanly parallelized, please follow these simple rules:
DON'T use implicit array subscripts:
example: real, dimension(3,3) :: A,B
A(:,2) = B(:,1) <--- DON'T USE
A(1:3,2) = B(1:3,1) <--- BETTER USE
In many cases the use of explicit array subscripts is indispensable for parallelization. Additionally, it is an easy means of preventing memory leaks.
Enclose all write statements with the following:
!$OMP CRITICAL (write2out)
<your write statement>
!$OMP END CRITICAL (write2out)
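A concrete instance of this pattern (the routine name, message, and arguments are made up for illustration):

subroutine reportCutback(e, i)   ! hypothetical reporting routine
  implicit none
  integer, intent(in) :: e, i    ! element and integration point
  !$OMP CRITICAL (write2out)
    write(6,'(a,i8,a,i4)') 'cutback at element ', e, ' IP ', i
  !$OMP END CRITICAL (write2out)
end subroutine reportCutback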
Whenever you change something in the code and are not sure whether it affects parallelization or leads to nonconforming behavior, please ask me and/or Franz to check it.
* also put a call to constitutive_microstructure at the start of each crystallite integration subroutine, like it was before; this is needed by the nonlocal model in case of crystallite cutback
* now remembering the stiffness similar to how we do it for Lp etc.; this avoids undefined stiffness values when the stiffness calculation does not converge
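A hedged sketch of the idea (names and calling context are hypothetical): keep the last converged tangent around and fall back to it whenever the current stiffness calculation did not converge, just as the last converged Lp is kept:

subroutine rememberStiffness(converged, dPdF, dPdF_remembered)
  implicit none
  logical, intent(in) :: converged
  real(8), dimension(3,3,3,3), intent(inout) :: dPdF            ! current tangent
  real(8), dimension(3,3,3,3), intent(inout) :: dPdF_remembered ! last converged tangent

  if (converged) then
    dPdF_remembered(1:3,1:3,1:3,1:3) = dPdF(1:3,1:3,1:3,1:3)    ! store newly converged stiffness
  else
    dPdF(1:3,1:3,1:3,1:3) = dPdF_remembered(1:3,1:3,1:3,1:3)    ! reuse instead of undefined values
  endif
end subroutine rememberStiffness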
* non-local stuff:
* changed non-local kinetics (Gilman2002)
* enforce zero shear rate when the overall carrier density is below the relevant density
* enforce zero density for those states that would become negative and were below the relevant density before (see the sketch after this list)
* dislocation velocity is not limited by V^(1/3) / dt anymore
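The cutoffs mentioned above, as a hypothetical sketch (the density variables and the relevance threshold are stand-ins for the actual nonlocal state layout):

! hypothetical cutoff logic for one material point
subroutine applyDensityCutoffs(Ntype, dt, rho_relevant, rho, rhoDot, gdot)
  implicit none
  integer, intent(in) :: Ntype
  real(8), intent(in) :: dt, rho_relevant
  real(8), dimension(Ntype), intent(inout) :: rho, rhoDot
  real(8), intent(inout) :: gdot
  integer :: t

  ! zero shear rate if the overall carrier density is below the relevant density
  if (sum(rho(1:Ntype)) < rho_relevant) gdot = 0.0d0

  ! drive states to exactly zero if they would become negative
  ! and were below the relevant density before
  do t = 1,Ntype
    if (rho(t) + rhoDot(t)*dt < 0.0d0 .and. rho(t) < rho_relevant) then
      rhoDot(t) = -rho(t)/dt
    endif
  enddo
end subroutine applyDensityCutoffs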
--> new "crystallite" part in config file
--> new "crystallite" option for microstructures
--> new output file "...job.outputCrystallite" to be used in conjunction with marc_addUserOutput for meaningful naming of User Defined Vars.