* Also added some more OpenMP directives to increase the percentage of parallelized code.
* "implicit none" was missing in two subroutines of homogenization and constitutive.
verbosity levels for debugging output:
0 : only version info and all output from "hypela2"/"umat"
1 : basic outputs from "CPFEM.f90", basic output from initialization routines, debug_info
2 : extensive outputs from "CPFEM.f90", extensive output from initialization routines
3 : basic outputs from "homogenization.f90"
4 : extensive outputs from "homogenization.f90"
5 : basic outputs from "crystallite.f90"
6 : extensive outputs from "crystallite.f90"
7 : basic outputs from the constitutive files
8 : extensive outputs from the constitutive files
If verbosity is equal to zero, none of the debug counters are updated during the calculation (e.g. debug_StressLoopDistribution or debug_cumDotStateTicks). This might speed up parallel calculations, because all of these counters need CRITICAL statements, which severely slow down parallel computation.
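A minimal sketch of such a guard (only debug_StressLoopDistribution is taken from the text above; verbosity, NiterationStress and the CRITICAL name are placeholders):
  if (verbosity > 0) then                                     ! counters are skipped completely at verbosity 0
    !$OMP CRITICAL (distributionStress)
      debug_StressLoopDistribution(NiterationStress) = &
        debug_StressLoopDistribution(NiterationStress) + 1    ! histogram of stress loop iterations
    !$OMP END CRITICAL (distributionStress)
  endif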
In order to keep the code parallelizable, please follow these simple rules:
DON'T use implicit array subscripts:
example: real, dimension(3,3) :: A,B
A(:,2) = B(:,1) <--- DON'T USE
A(1:3,2) = B(1:3,1) <--- BETTER USE
In many cases the use of explicit array subscripts is necessary for parallelization. Additionally, it is an easy way to prevent memory leaks.
Enclose all write statements with the following:
!$OMP CRITICAL (write2out)
<your write statement>
!$OMP END CRITICAL (write2out)
Whenever you change something in the code and are not sure whether it affects parallelization or leads to nonconforming behavior, please ask me and/or Franz to check it.
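As an illustration only (the element loop, variable names, and the stress update are made up), a loop that follows both rules could look like this:
  !$OMP PARALLEL DO
    do e = 1,Nelems
      Tstar(1:3,1:3,e) = matmul(C(1:3,1:3,e), strain(1:3,1:3,e))     ! explicit subscripts everywhere
      !$OMP CRITICAL (write2out)
        write(6,'(a,i8)') ' finished element ', e                    ! write statement wrapped in the named CRITICAL
      !$OMP END CRITICAL (write2out)
    enddo
  !$OMP END PARALLEL DO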
* removed input variables in constitutive_collectDotState and constitutive_postResults that are no longer needed (because of recent changes in constitutive_nonlocal)
* better to use SINGLE (which has an implicit barrier at the end) instead of the MASTER construct (see the sketch after these notes)
* deleted all explicit BARRIERs after do loops, since the parallel loop construct implies a barrier at the end
* had to add some BARRIER constructs
* only the master thread is allowed to increase the state counter
Yet parallelization does not seem to give a significant decrease in calculation time with the nonlocal model (possibly because of too many CRITICAL statements?).
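A sketch of the SINGLE/MASTER difference (sharedFlag and NiterationState are only illustrative names for some shared variable and the state counter mentioned above):
  !$OMP SINGLE
    sharedFlag = .false.                        ! executed by exactly one thread; implicit barrier at END SINGLE
  !$OMP END SINGLE

  !$OMP MASTER
    NiterationState = NiterationState + 1       ! executed by the master thread only; MASTER has NO implicit barrier...
  !$OMP END MASTER
  !$OMP BARRIER                                 ! ...so an explicit BARRIER is needed before other threads read the counter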
* also put a call to constitutive_microstructure at the start of each crystallite integration subroutine, as it was before; this is needed for the nonlocal model in case of a crystallite cutback
numerics: polishing
mpie_cpfem_marc: polishing
..powerlaw: aware of symmetryType function
crystallite: aware of symmetryType function, smaller leapfrog acceleration
IO: new warning 101
CPFEM: range of odd stress is now -1e15...+1e15, H_sym is used for stiffness
* in the fixed point iteration: updating the dependent states after the state preguess was missing; on the other hand, the first call to constitutive_microstructure was obsolete
* now remembering the stiffness similar to how we do it for Lp etc.; this avoids undefined stiffness values when the stiffness calculation does not converge
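A sketch of that fallback idea (the array names and the grain/IP/element indices g,i,e are only illustrative):
  if (crystallite_converged(g,i,e)) then
    dPdF_backup(1:3,1:3,1:3,1:3,g,i,e) = crystallite_dPdF(1:3,1:3,1:3,1:3,g,i,e)    ! remember the last converged stiffness
  else
    crystallite_dPdF(1:3,1:3,1:3,1:3,g,i,e) = dPdF_backup(1:3,1:3,1:3,1:3,g,i,e)    ! fall back to it instead of leaving it undefined
  endif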
* non-local stuff:
* changed non-local kinetics (Gilman2002)
* enforce zero shear rate when the overall carrier density is below the relevant density
* enforce zero density for those states that become negative and were below the relevant density before (both rules are sketched after this list)
* dislocation velocity is not limited by V^(1/3) / dt anymore
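Both density rules could be sketched like this (rho, rhoNew, gdot and relevantRho are placeholder names, not the actual variables of constitutive_nonlocal):
  if (sum(rho) < relevantRho) gdot = 0.0                      ! zero shear rate below the relevant carrier density
  where (rhoNew < 0.0 .and. rho < relevantRho) rhoNew = 0.0   ! densities that would turn negative are set to zero instead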
2) local stiffness calculation is now standard for non-local grains
3) stressLoopDistribution discriminates between (a) central solution and (b) stiffness perturbation
4) debugger is switched on by default... (but verboseDebugger is not!)
rather perturb all components at once (and optionally decrease the frequency of the Jacobian update with the iJaco parameter) than perturb only a single component per cycle
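A sketch of what that means (iJaco is the numerics parameter mentioned above; cycleCounter, F, P, pert and the stressResponse evaluation are placeholders):
  if (mod(cycleCounter, iJaco) == 0) then                 ! update the Jacobian only every iJaco-th cycle...
    do k = 1,3; do l = 1,3                                ! ...but then perturb all nine components within that one cycle
      F_pert = F
      F_pert(k,l) = F_pert(k,l) + pert                    ! forward perturbation of one component
      dPdF(1:3,1:3,k,l) = (stressResponse(F_pert) - P)/pert
    enddo; enddo
  endif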
- grainrotation calculation is now done with symmetryID 0, i.e. without symmetry reduction, since we want the absolute misorientation.
- While the math module keeps everything in radians, the post results eulerangles and axisangle are given in degrees.
And: grain rotation seems OK after the previous changes in the math module.
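The conversion applied in the post results is just (inDeg, pi and the angle arrays are placeholder names):
  inDeg = 180.0/pi                                   ! math module keeps radians internally
  eulerangles_out(1:3) = eulerangles(1:3) * inDeg    ! only the post results are converted to degrees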