introduced a PETSc debugging option that adds some debugging switches to the PETSc options, and moved PETSc initialization from numerics to DAMASK_spectral_utilities
Removed "leapfrogging" (increase of step for next guess, when last guess was ok); Replaced Armijo rule testing for step size by simple check if the residuum got better, since the former virtually did not have any effect; consistently using the 2-norm of the residuum rather than infinity-norm for the convergence check throughout the function
lattice.f90, FEsolving.f90: explicitly declared public functions and variables; all others are now private
numerics.f90: changed the output format of real numbers: instead of 0.1eX, 1.0e(X-1) is now printed to screen
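This presumably corresponds to switching from the e to the es edit descriptor (an assumption; es standardly puts the leading significant digit before the decimal point):

  program realformat
   implicit none
   real :: x = 3.14159
   write(6,'(e12.4)')  x    ! prints   0.3142E+01
   write(6,'(es12.4)') x    ! prints   3.1416E+00, i.e. the 1.0e(X-1) style
  end program realformat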
Makefile: now using the correct optimization flags for OPTIMIZATION=AGGRESSIVE
DAMASK_spectral_AL.f90: improved, but still being tested; stress BCs now seem to be handled correctly
added compiler switches for gfortran and ifort to check for standard conformity
old GNU compilers <4.4 are no longer supported because they don't provide the C bindings needed for FFTW
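This is because the modern FFTW interface is pulled in through the Fortran 2003 C bindings, roughly like this (sketch; requires an FFTW3 installation providing fftw3.f03):

  program fftwbinding
   use, intrinsic :: iso_c_binding   ! available only in gfortran >= 4.4
   implicit none
   include 'fftw3.f03'               ! FFTW's Fortran 2003 interface
   type(C_PTR) :: plan               ! plans are opaque C pointers
  end program fftwbinding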
-removed too long lines
-restructured f2py modules and merged make_DAMASK2Python into setup processing
-setup_code.py now sets the library path in the makefile and asks for compile switches for the spectral code
-replaced \ in format strings with $
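Both \ and $ are vendor extensions that suppress the end-of-record newline; $ is the more widely accepted spelling. A minimal illustration (the message text is made up):

  program nonadvancing
   implicit none
   write(6,'(a,$)') 'progress: '   ! $ suppresses the newline (vendor extension)
   write(6,'(a)')   'done'         ! appears on the same line
  end program nonadvancing

The standard-conforming alternative would be write(6,'(a)',advance='no').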
restructured DAMASK_spectral:
-more logical output and code structure
-better input for spectral debug parameters
introduced parameters for selective debugging of the spectral code and partly reintroduced the advanced divergence calculation, which is controlled via debug.config
added a switch in numerics to control the divergence behavior (uncorrected, or corrected by a phenomenological factor)
added precision directive to all values I found
* replaced "dble" intrinsic function by "real" with pReal kind in constitutive_nonlocal.f90
* removed useless line breaks in output of state in CPFEM.f90
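A sketch of the dble replacement; pReal normally comes from DAMASK's prec module, but is defined locally here so the example stands alone:

  program kinds
   implicit none
   integer, parameter :: pReal = selected_real_kind(15)   ! stand-in for prec's pReal
   integer :: n = 3
   real(pReal) :: x
   x = dble(n)          ! old: hard-wired double precision, ignores pReal
   x = real(n, pReal)   ! new: follows the configured working precision
  end program kinds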
verbosity levels:
0 : only version info and everything from "hypela2"/"umat"
1 : basic outputs from "CPFEM.f90", basic outputs from initialization routines, debug_info
2 : extensive outputs from "CPFEM.f90", extensive output from initialization routines
3 : basic outputs from "homogenization.f90"
4 : extensive outputs from "homogenization.f90"
5 : basic outputs from "crystallite.f90"
6 : extensive outputs from "crystallite.f90"
7 : basic outputs from the constitutive files
8 : extensive outputs from the constitutive files
If verbosity is equal to zero, none of the debug counters (e.g. debug_StressLoopDistribution or debug_cumDotStateTicks) are updated during the calculation. This might speed up parallel calculations, because all of these counters need critical sections, which slow down parallel computation considerably.
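A sketch of such a guarded counter update (the verbosity handling is as described above; pInt, the counter bounds, and the initial values are simplified here):

  program counters
   implicit none
   integer, parameter :: pInt = selected_int_kind(9)
   integer(pInt) :: verbosity = 0_pInt, NiterationStress = 3_pInt
   integer(pInt), dimension(10) :: debug_StressLoopDistribution = 0_pInt

   if (verbosity > 0_pInt) then   ! at verbosity 0 the bookkeeping (and its locking) is skipped
     !$OMP CRITICAL (distributionStress)
       debug_StressLoopDistribution(NiterationStress) = &
         debug_StressLoopDistribution(NiterationStress) + 1_pInt
     !$OMP END CRITICAL (distributionStress)
   endif
  end program counters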
In order to keep it like that, please follow these simple rules:
DON'T use implicit array subscripts:
example: real, dimension(3,3) :: A,B
         A(:,2)   = B(:,1)      <--- DON'T USE
         A(1:3,2) = B(1:3,1)    <--- BETTER USE
In many cases the use of explicit array subscripts is essential for parallelization. Additionally, it is an easy means of preventing memory leaks.
Enclose all write statements with the following:
!$OMP CRITICAL (write2out)
<your write statement>
!$OMP END CRITICAL (write2out)
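For example (the message text is made up):

  !$OMP CRITICAL (write2out)
    write(6,'(a,i2)') '<< CPFEM >> verbosity level ', verbosity
  !$OMP END CRITICAL (write2out)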
Whenever you change something in the code and are not sure whether it affects parallelization and leads to nonconforming behavior, please ask me and/or Franz to check it.
* now remembering the stiffness similarly to how we do it for Lp etc.; this avoids undefined stiffness values for nonconverged stiffness calculations
* non-local stuff:
  * changed non-local kinetics (Gilman2002)
  * enforce zero shear rate when the overall carrier density is below the relevant density
  * enforce zero density for those states that become negative and were below the relevant density before
  * dislocation velocity is no longer limited by V^(1/3) / dt
2) local stiffness calculation is now standard for non-local grains
3) stressLoopDistribution discriminates between (a) central solution and (b) stiffness perturbation
4) debugger is switched on by default... (but verboseDebugger is not!)