temperature is stored in crystallite, but is homogeneous on one IP (i.e. not a component (grain) quantity) and is an input value passed in by the BVP solver.
introduced heat, a component (grain) quantity which is homogenized before being returned to the heat transfer solver.
went ahead with the removal of dummy functions in homogenization and constitutive; this time mainly reduced function signatures to reflect the actually needed quantities.
0: uncorrected, slope per side length (physical dimension), e = res/dim
1: corrected by side length, slope per unit length, e = res/1
2: corrected such that the distance between Fourier points is e = 1
always with respect to the medium side length of the x, y, z directions
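A minimal sketch of how such a mode switch could scale the frequency vector; all names here (divergence_correction, scaledFrequency, geomdim) are illustrative assumptions, not the actual implementation:

  ! hypothetical sketch, names assumed; pReal is the working real kind from prec
  pure function scaledFrequency(divergence_correction,k_s,res,geomdim) result(xi)
    use prec, only: pReal
    integer,     intent(in)               :: divergence_correction
    integer,     intent(in), dimension(3) :: k_s, res           ! frequency index, grid resolution
    real(pReal), intent(in), dimension(3) :: geomdim            ! physical side lengths
    real(pReal),             dimension(3) :: xi
    select case (divergence_correction)
      case (0);     xi = real(k_s,pReal)/geomdim                ! uncorrected:     e = res/dim
      case (1);     xi = real(k_s,pReal)                        ! per unit length: e = res/1
      case default; xi = real(k_s,pReal)/real(res,pReal)        ! point spacing:   e = 1
    end select                                                  ! (the real code additionally uses the
  end function scaledFrequency                                  !  medium x/y/z side length for scaling)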
introduced a PETSc debugging switch that adds some debugging options to the PETSc options, and moved the PETSc initialization from numerics to DAMASK_spectral_utilities
a cutback is tried first if the material point model or the BVP solver does not converge.
If regridding is not enabled, then after the maximum number of cutbacks the simulation stops in case of a non-converged material point and continues in case of a non-converged BVP solver.
set regridMode to 2 to enable regridding if the BVP solver OR the material point model does not converge,
set regridMode to 1 to enable regridding if the material point model does not converge; a non-converged BVP solver is ignored as in the standard case.
For regridding, the load case needs to have a restart frequency set (see the schematic sketch below).
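Schematically, the cutback/regrid decision amounts to the following; variable names (regridMode, maxCutBack, quit, ...) are assumptions for illustration, not the actual implementation:

  ! schematic decision logic only, names assumed
  if (.not. (materialPointConverged .and. solverConverged)) then
    if (cutBackLevel < maxCutBack) then
      cutBackLevel = cutBackLevel + 1                  ! first remedy: cut back the time step
    else
      select case (regridMode)
        case (2)                                       ! regrid on either failure
          doRegrid = .true.
        case (1)                                       ! regrid on material point failure only,
          doRegrid = .not. materialPointConverged      ! BVP solver failure is ignored
        case default                                   ! no regridding: stop on material point
          if (.not. materialPointConverged) call quit(1) ! failure, continue on BVP failure
      end select
    endif
  endif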
enabled restarting for Basic PETSc variant
corrected typo in constitutive_j2.f90 that might cause Abaqus to crash
now running 20 Abaqus tests in order to have decent statistics on the crash behavior
improved abaqus_v6.env
The mainly affected modules are IO and mesh. Most of the changes in mesh result from reordering the functions when grouping them by solver.
A further advantage is that the FE solvers do not need FFTW and kdtree2 anymore. The include files for these two libraries moved to DAMASKROO/lib now that I figured out how to use an include path in the Makefile.
Put all the files I got when testing compilation with Abaqus into a folder which is to become the Abaqus compilation test.
lattice.f90, FEsolving.f90: explicitly defined public functions and variables, all others are now private
numerics.f90: changed the output format of real numbers; instead of 0.1eX, 1.0e(X-1) is now printed to screen
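For illustration, the difference corresponds to Fortran's E vs. ES edit descriptors (the exact format strings used are an assumption):

  program format_demo
    implicit none
    real :: x = 3.14159
    write(*,'(e12.4)')  x   ! prints "  0.3142E+01" -- old 0.1eX style
    write(*,'(es12.4)') x   ! prints "  3.1416E+00" -- new 1.0e(X-1) style
  end program format_demo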
Makefile: now using correct Optimization flags for OPTIMIZATION=AGGRESSIVE
DAMASK_spectral_AL.f90: improved, but still testing. Stress BCs now seem to be handled correctly
removed simplified_algorithm flag because the basic scheme using the polarization field will not be implemented
introduced divergence_correction flag for making divergence criterion resolution-independent (still experimental and not set by default)
corrected output and restart frequency (now modulo on the increments of the current load case)
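i.e., schematically (names assumed, sketch only):

  ! inc counts increments within the current load case, not globally
  writeResults = mod(inc, bc(loadcase)%outputFrequency)  == 0
  writeRestart = mod(inc, bc(loadcase)%restartFrequency) == 0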
removed cut_off parameter for damask_spectral
removed output of derived divergence measures and added RMS output in brackets
added comments and options to the makefile
added compiler switches for gfortran and ifort to check for standard conformity
old GNU compilers (< 4.4) are no longer supported because they don't provide the C bindings for FFTW
additional output in DAMASK_spectral_interface.f90
fixed 132-character line-length cut-off in constitutive_nonlocal.f90
fixed rounding error in the complex number initialization in math.f90 ((1.0_pReal)*2.0_pReal*pi)
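The bug class is worth a note; a minimal sketch, assuming pReal is the double-precision working kind and pi comes from math (the exact expression in math.f90 may differ):

  complex(pReal) :: z
  z = cmplx(0.0_pReal,1.0_pReal)*2.0_pReal*pi        ! cmplx() without a kind argument
                                                     ! truncates to default (single) precision
  z = (0.0_pReal,1.0_pReal)*2.0_pReal*pi             ! typed complex literal keeps pReal precision
  z = cmplx(0.0_pReal,1.0_pReal,pReal)*2.0_pReal*pi  ! or pass the kind explicitly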
new $DAMASK_NUM_THREADS warning in numerics.f90 / IO.f90
polishing in DAMASK_spectral.f90
introduced parameters for selective debugging of the spectral code and partly reintroduced the advanced divergence calculation, which is controlled via debug.config
added a switch in numerics to control the divergence behavior (uncorrected, or corrected by a phenomenological factor)
added precision directive to all values I found
made convergence independent of size and resolution
polishing output in DAMASK_spectral.f90
added a function to compute eigenvalues without eigenvectors and a function to convert a 3x3 logical to a 9-vector in math.f90
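The latter is presumably along these lines (a sketch; the actual name and flattening order in math.f90 may differ):

  pure function math_logical33to9(m33) result(v9)
    logical, dimension(3,3), intent(in) :: m33
    logical, dimension(9)               :: v9
    v9 = reshape(m33,[9])               ! column-major flattening of the 3x3 mask
  end function math_logical33to9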
removed obsolete variable in numerics.f90
fixed a bug in the bc_temperature assignment that was corrupting memory.
Temperature is taken from the first load case and evolves adiabatically from there for the moment, i.e. T specifications from later load cases are ignored...
In order to keep it like that, please follow these simple rules:
DON'T use implicit array subscripts:
example: real, dimension(3,3) :: A,B
A(:,2) = B(:,1) <--- DON'T USE
A(1:3,2) = B(1:3,1) <--- USE THIS INSTEAD
In many cases the use of explicit array subscripts is inevitable for parallelization. Additionally, it is an easy means to prevent memory leaks.
Enclose all write statements with the following:
!$OMP CRITICAL (write2out)
<your write statement>
!$OMP END CRITICAL (write2out)
Whenever you change something in the code and are not sure whether it affects parallelization or leads to nonconforming behavior, please ask me and/or Franz to check it.
mpie_spectral and numerics: added a switch to prevent pre-calculation of gamma_hat; slower, but saves memory
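Schematically, the trade-off (switch name and array shapes assumed):

  if (.not. memory_efficient) then                     ! precompute gamma_hat for every
    allocate(gamma_hat(3,3,3,3,res(1),res(2),res(3)))  ! Fourier point: fast, large memory
  else                                                 ! footprint
    allocate(gamma_hat(3,3,3,3,1,1,1))                 ! single entry, rebuilt on the fly
  endif                                                ! per point: slower, saves memory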
3Dvisualize: started to add support for gmsh (not fully working yet)
reconstruct: new version of f2py/Fortran subroutines for output of results from spectral method
removed storage of the full Cauchy stress field from mpie_spectral.f90; only the average is stored now
added Cauchy stress and von Mises equivalent calculation to spectral post-processing.
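For reference, the underlying formulas (sketch; helper names math_det33 and math_I3 are assumptions):

  sigma  = matmul(P,transpose(F))/math_det33(F)        ! Cauchy stress: sigma = P F^T / det(F)
  s      = sigma - (sigma(1,1)+sigma(2,2)+sigma(3,3))/3.0_pReal*math_I3  ! deviatoric part
  vMises = sqrt(1.5_pReal*sum(s*s))                    ! von Mises equivalent stress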
numerics: polishing
mpie_cpfem_marc: polishing
..powerlaw: aware of symmetryType function
crystallite: aware of symmetryType function, smaller leapfrog acceleration
IO: new warning 101
CPFEM: range of odd stress is now -1e15...+1e15, H_sym is used for stiffness
added some parameters for spectral method to numerics.f90 (tolerance)
changed error message concerning spectral method in IO.f90
corrected calculation of stress BC in mpie_spectral.f90
* now remembering stiffness similarly to how we do it for Lp etc.; avoids undefined stiffness values for a non-converged stiffness calculation
* non-local stuff:
* changed non-local kinetics (Gilman2002)
* enforce zero shear rate when the overall carrier density is below the relevant density (see the sketch after this list)
* enforce zero density for those states that become negative and were below relevant density before
* dislocation velocity is not limited by V^(1/3) / dt anymore
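Schematically, the two clamping rules amount to (names assumed, sketch only):

  if (sum(rhoCarrier) < relevantRho) gdot = 0.0_pReal            ! zero shear rate below the
                                                                 ! relevant carrier density
  where (rho < 0.0_pReal .and. rhoPrevious < relevantRho) &
    rho = 0.0_pReal                                              ! zero out states that went
                                                                 ! negative from below relevantRho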