Daniel Otto de Mentock
c2e78158c8
PETSc macros can be omitted with use of pReal
2022-12-05 10:38:36 +01:00
Daniel Otto de Mentock
b96576ce93
standardizing double definition across modules
2022-12-01 17:27:30 +01:00
Martin Diehl
0508fa9ec2
flatten solver data layout
...
avoid a problem with chunking/compression (only relevant for large
simulations when this feature is used).
In addition, use unified variable naming: no "_current" suffix for
thermal and damage, following the example of mech.
2022-11-27 17:07:25 +01:00
Sharan Roongta
99673bb865
Merge branch '162-error-stress-bc-grid' into 'development'
...
avoid confusion during reporting
Closes #162
See merge request damask/DAMASK!662
2022-11-24 12:50:39 +00:00
Martin Diehl
deb8ebeb5b
avoid confusion during reporting
...
polarization needs to ensure BC for F and P
2022-11-24 09:47:48 +01:00
Martin Diehl
cad4cbc5d2
circumvent bug in gfortran
...
associating to a strided pointer seems to cause trouble
2022-11-20 23:35:54 +01:00
Martin Diehl
34fb7e921a
use self-documenting code
...
the comments did not add anything that was not clear from the
variable/function names
2022-11-20 12:58:50 +01:00
Martin Diehl
ad3c18b29b
avoid use of global variables
2022-11-19 12:24:16 +01:00
Martin Diehl
cb6df618fe
avoid global variables
2022-11-19 11:47:44 +01:00
Martin Diehl
18b8923929
centralize FFTs
2022-11-19 09:37:26 +01:00
Martin Diehl
cd2a21509a
avoid dependencies on global state
...
requires one extra forward FFT per iteration for the basic scheme
2022-11-19 09:01:57 +01:00
Martin Diehl
ce98cfdd5e
padding is handled centrally in the FFT forward routines
2022-11-19 07:58:45 +01:00
Martin Diehl
df5487e1a9
Re-written YAML types
...
Strict typing for YAML
The new access pattern requires specifying the expected type, i.e. 'scalar', 'list', or 'dict'. This ensures that the node offers the expected functionality instead of polluting 'tNode' with dummy functions that throw error messages if not overwritten.
The restructuring of the code allows methods to be constructed hierarchically without much code duplication.
Some aspects of the error messaging system have been improved.
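A minimal sketch of what such a typed access pattern can look like (type and method names here are illustrative, not necessarily the exact DAMASK API):

```fortran
! Illustrative only: the caller states which node type it expects.
class(tDict), pointer :: phase          ! hypothetical dict node
class(tList), pointer :: output         ! hypothetical list node
real(pReal)           :: nu

phase  => config%get_dict('phase')      ! errors out if the node is not a dict
output => phase%get_list('output')      ! ... or not a list
nu     =  phase%get_asFloat('nu')       ! scalar access with type conversion
```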
2022-10-25 16:09:36 +00:00
Martin Diehl
2f1904efec
only MPI_f08 is standard conforming
2022-06-21 23:11:22 +02:00
Martin Diehl
b8c3d75700
[skip sc] enforce interfaces (quick fix: declare as external)
2022-06-21 22:59:01 +02:00
Martin Diehl
d5db083fec
more convenient to see all invalid parameters
2022-05-27 00:25:25 +02:00
Martin Diehl
91b71fdff8
systematic naming scheme
2022-04-24 04:45:41 +02:00
Martin Diehl
b80b406ad5
more specific name
...
'interface' can be an interface to anything, 'CLI' is an established
abbreviation for 'command line interface'
2022-04-23 16:09:31 +02:00
Martin Diehl
4ca0ea6af2
avoid linking issues with gfortran+MPI
...
most likely related to the fact that HDF5 uses the old Fortran
interface, not MPI_f08 as DAMASK does
2022-02-05 18:38:06 +01:00
Martin Diehl
762f93d724
following naming convention
2022-01-29 15:30:59 +01:00
Martin Diehl
487912cfb0
following Python notation
2022-01-29 15:14:40 +01:00
Martin Diehl
a86dc322fb
consistently put the check on the next line
2022-01-26 12:18:26 +01:00
Martin Diehl
96fed368ad
name adjustments
2022-01-21 14:51:46 +01:00
Martin Diehl
7bd8452bf8
set return value
2022-01-20 07:56:45 +01:00
Martin Diehl
7b1080fdb7
better and consistent variable name
2022-01-20 07:42:16 +01:00
Martin Diehl
1f86111f57
call SNESSetDM after DMDASNESSetFunctionLocal
...
following example ex5f.F90, seems to resolve segmentation fault
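A minimal sketch of the resulting call order (variable names illustrative, error handling shortened):

```fortran
! Register the cell-local residual first, then attach the DM to the SNES
! (mirrors PETSc example ex5f.F90).
call DMDASNESSetFunctionLocal(DM_mech, INSERT_VALUES, formResidual, PETSC_NULL_SNES, err_PETSc)
CHKERRQ(err_PETSc)
call SNESSetDM(SNES_mech, DM_mech, err_PETSc)
CHKERRQ(err_PETSc)
```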
2022-01-19 22:57:22 +01:00
Martin Diehl
29530da579
use correct kind of constants for calls to MPI/PETSc
2022-01-13 13:50:30 +01:00
Martin Diehl
a7417a7ad7
default integer, PETSc integer, and MPI integer might be different
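A sketch of why explicit conversion matters (the kind constant shown is illustrative; the actual names depend on the MPI and PETSc installation):

```fortran
integer                   :: n        ! default Fortran integer
integer(MPI_INTEGER_KIND) :: err_MPI  ! kind used by the MPI library (illustrative constant)
PetscInt                  :: n_PETSc  ! 32 or 64 bit, depending on the PETSc build

n_PETSc = int(n, kind(n_PETSc))       ! convert explicitly at the call boundary instead of
                                      ! assuming the kinds happen to match
```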
2022-01-13 12:02:33 +01:00
Martin Diehl
1ddf1e5694
support for PETSc with 64bit integers
...
compiles, but untested
2021-12-21 23:53:46 +01:00
Philip Eisenlohr
da9fdf53d2
consistent indentation and line-spacings in reporting
2021-11-15 12:35:44 -05:00
Martin Diehl
4160c4fdb4
fix for parallel HDF5
...
if filters are applied, writing from one process does not work if the
file is opened for parallel write
2021-08-15 13:26:15 +02:00
Martin Diehl
85735605f8
more flexibility for the L in the load case
...
Note that mixed boundary conditions for L introduce an ambiguity.
Consider:
L = [[1.0,  x,  x],
     [  0,  0,  0],
     [  0,  0,  0]]
P = [[  x,  0,  0],
     [  x,  x,  x],
     [  x,  x,  x]]
What we need is F^(n+1) = F^n + F_dot^(n+1) * Delta_t, where F_dot^(n+1) is
F_dot^(n+1)_ij = L_ik F^n_kj.
So component F_11 has contributions from L_12 and L_13. We first assume
L_12=L_13=0 and then choose F^(n+1)_12 and F^(n+1)_13 to get
P_12=P_13=0. This implicitly gives a solution for L_12 and L_13, which
is however only one out of infinitely many.
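Writing out the affected component of the rate relation above makes the ambiguity explicit:

$$\dot{F}^{\,n+1}_{11} = L_{11}\,F^{n}_{11} + L_{12}\,F^{n}_{21} + L_{13}\,F^{n}_{31}$$

so for nonzero F^n_21 or F^n_31, infinitely many (L_12, L_13) pairs are compatible with P_12 = P_13 = 0; setting L_12 = L_13 = 0 simply picks one of them.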
2021-07-20 07:10:28 +02:00
Martin Diehl
03b7532cc5
numpy.MaskedArray behavior
2021-07-19 23:27:10 +02:00
Martin Diehl
1c1dc9383e
symbolic names
2021-07-19 22:30:20 +02:00
Martin Diehl
f9edeb40a5
descriptive names
2021-07-17 11:50:21 +02:00
Martin Diehl
2a84aa7ae4
obvious, no need for comment
2021-07-16 20:32:21 +02:00
Martin Diehl
136a4b1377
PETSc defines are rather complicated
...
now mpi_f08 can be used on newer PETSc installations if old MPI modules
are not exposed
2021-07-09 18:48:25 +02:00
Martin Diehl
637f78bd52
old name (for PETSc < 3.15)
2021-07-09 14:50:29 +02:00
Martin Diehl
139f2c177a
use MPI_f08 if possible
...
most PETSc installations provide outdated MPI (f90 version).
MPI_COMM_WORLD is now of derived type (Fortran 08 style);
PETSC_COMM_WORLD is the plain integer (f90 style) alias.
Note that HDF5 is assumed to have f90 interfaces.
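A self-contained sketch of how the two communicator representations relate (assuming an mpi_f08-capable MPI installation):

```fortran
program comm_handles
  use MPI_f08
  implicit none
  type(MPI_Comm) :: comm_f08   ! Fortran 2008 style: derived type
  integer        :: comm_f90   ! f90 style: plain integer handle

  call MPI_Init()
  comm_f08 = MPI_COMM_WORLD    ! with mpi_f08, MPI_COMM_WORLD is type(MPI_Comm)
  comm_f90 = comm_f08%MPI_VAL  ! integer alias of the same communicator (cf. PETSC_COMM_WORLD)
  print *, 'integer handle:', comm_f90
  call MPI_Finalize()
end program comm_handles
```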
2021-07-08 16:27:37 +02:00
Martin Diehl
58bc6e2ba6
avoid chained inclusions
2021-07-08 14:27:04 +02:00
Martin Diehl
5d0fc4fca3
more meaningful order
...
and intent(out) variables for read are at the front
2021-06-01 16:46:24 +02:00
Martin Diehl
0072ebfa64
polishing
2021-03-27 23:17:04 +01:00
Martin Diehl
7320120c5d
Merge branch 'development' into avoid_data_copy_restart_MPI
2021-03-26 08:58:03 +01:00
Sharan Roongta
fc172921fb
unified citation style continued
2021-03-19 10:41:47 +01:00
Vitesh Shah
4912342b1b
added missing arguments
2021-03-15 11:46:30 +01:00
Vitesh Shah
a59af55f1a
read data by one process and broadcast it
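A minimal sketch of the pattern (helper and variable names are illustrative, error handling omitted):

```fortran
! Rank 0 reads the file; first the length, then the content is broadcast to all ranks.
character(len=:), allocatable :: fileContent
integer :: fileLength, err_MPI

if (worldrank == 0) then
  fileContent = IO_read(fileName)                 ! hypothetical helper returning the file as one string
  fileLength  = len(fileContent)
end if
call MPI_Bcast(fileLength, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, err_MPI)
if (worldrank /= 0) allocate(character(len=fileLength) :: fileContent)
call MPI_Bcast(fileContent, fileLength, MPI_CHARACTER, 0, MPI_COMM_WORLD, err_MPI)
```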
2021-03-15 10:58:59 +01:00
Vitesh Shah
adcb24d2e1
write data of average quantities non-parallel
2021-03-10 16:33:02 +01:00
Martin Diehl
4dd99d4c39
solver is selected in load case, not numerics.yaml
2021-02-28 19:13:20 +01:00
Vitesh
d54e49e3bc
restore functionality to write non-parallel
...
not needed at the moment, but useful in general. If 'PETSc = parallel'
should always hold, we can simplify much more
2021-02-22 13:37:21 +01:00
Martin Diehl
e855083964
systematic names
2021-02-11 14:19:04 +01:00