Grid

Bases: VolumetricData

A representation of the charge density, ELF, or other volumetric data. This class is a wrapper around Pymatgen's VolumetricData class with additional properties and methods.

Parameters:

| Name | Type | Description | Default |
| --- | --- | --- | --- |
| structure | Structure | The crystal structure associated with the volumetric data. Represents the lattice and atomic coordinates using the Structure class. | required |
| data | dict[str, NDArray[float]] | A dictionary containing the volumetric data. Keys include: "total", a 3D NumPy array of the total spin density (for ELF data, the spin-up ELF in spin-polarized calculations and the total ELF otherwise); and "diff" (optional), a 3D NumPy array of the spin-difference density (spin up - spin down), or the spin-down ELF for ELF data. | required |
| data_aug | NDArray[float] | Any extra information associated with the volumetric data (typically augmentation charges). | None |
| source_format | Format | The file format this grid was created from: 'vasp', 'cube', 'hdf5', or None. | None |
| data_type | DataType | The type of data stored in the Grid object, either 'charge' or 'elf'. If None, the data type is guessed from the data range. | charge |
| distance_matrix | NDArray[float] | A pre-computed distance matrix, if available. Useful for passing distance matrices between sums, short-circuiting an otherwise expensive operation. | None |
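Example: the sketch below shows one way to build and query a Grid from in-memory data. The import paths and the pymatgen-style Structure constructor are assumptions for illustration only; the Grid constructor, properties, and methods used here are taken from the source code shown below.

    import numpy as np

    from baderkit.core.toolkit.grid import Grid            # assumed import path
    from baderkit.core.toolkit.structure import Structure  # assumed import path

    # A simple cubic cell with one atom and a synthetic 20x20x20 "charge" array
    structure = Structure(
        lattice=[[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]],
        species=["Na"],
        coords=[[0.0, 0.0, 0.0]],
    )
    data = {"total": np.random.rand(20, 20, 20)}
    grid = Grid(structure=structure, data=data)

    print(grid.shape)           # [20 20 20]
    print(grid.point_volume)    # cell volume divided by the number of grid points
    value = grid.value_at(0.25, 0.50, 0.75)   # interpolated at fractional coords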
Source code in src/baderkit/core/toolkit/grid.py
class Grid(VolumetricData):
    """
    A representation of the charge density, ELF, or other volumetric data.
    This class is a wrapper around Pymatgen's VolumetricData class with
    additional properties and methods.

    Parameters
    ----------
    structure : Structure
        The crystal structure associated with the volumetric data.
        Represents the lattice and atomic coordinates using the `Structure` class.
    data : dict[str, NDArray[float]]
        A dictionary containing the volumetric data. Keys include:
        - `"total"`: A 3D NumPy array representing the total spin density. If the
            data is ELF, represents the spin up ELF for spin-polarized calculations
            and the total ELF otherwise.
        - `"diff"` (optional): A 3D NumPy array representing the spin-difference
          density (spin up - spin down). If the data is ELF, represents the
          spin down ELF.
    data_aug : NDArray[float], optional
        Any extra information associated with volumetric data
        (typically augmentation charges)
    source_format : Format, optional
        The file format this grid was created from, 'vasp', 'cube', 'hdf5', or None.
    data_type : DataType, optional
        The type of data stored in the Grid object, either 'charge' or 'elf'. If
        None, the data type will be guessed from the data range.
    distance_matrix : NDArray[float], optional
        A pre-computed distance matrix, if available. Useful for passing
        distance matrices between sums, short-circuiting an otherwise
        expensive operation.
    """

    def __init__(
        self,
        structure: Structure,
        data: dict,
        data_aug: dict = None,
        source_format: Format = None,
        data_type: DataType = DataType.charge,
        distance_matrix: NDArray[float] = None,
        **kwargs,
    ):
        # The following is copied directly from pymatgen, but replaces their
        # creation of a RegularGridInterpolator to avoid some overhead
        self.structure = Structure.from_dict(
            structure.as_dict()
        )  # convert to baderkit structure
        self.is_spin_polarized = len(data) >= 2
        self.is_soc = len(data) >= 4
        # convert data to numpy arrays in case they were jsanitized as lists
        self.data = {k: np.array(v) for k, v in data.items()}
        self.dim = self.data["total"].shape
        self.data_aug = data_aug or {}
        self.ngridpts = self.dim[0] * self.dim[1] * self.dim[2]
        # lazy init the spin data since this is not always needed.
        self._spin_data: dict[Spin, float] = {}
        self._distance_matrix = distance_matrix or {}
        self.xpoints = np.linspace(0.0, 1.0, num=self.dim[0])
        self.ypoints = np.linspace(0.0, 1.0, num=self.dim[1])
        self.zpoints = np.linspace(0.0, 1.0, num=self.dim[2])
        self.interpolator = Interpolator(self.data["total"])
        self.name = "VolumetricData"

        # The rest of this is new for BaderKit methods
        if source_format is None:
            source_format = Format.vasp
        self.source_format = Format(source_format)

        if data_type is None:
            # attempt to guess data type from data range
            if self.total.max() <= 1 and self.total.min() >= 0:
                data_type = DataType.elf
            else:
                data_type = DataType.charge
            logging.info(f"Data type set as {data_type.value} from data range")
        self.data_type = data_type

        # assign cached properties
        self._reset_cache()

    def _reset_cache(self):
        self._grid_indices = None
        self._flat_grid_indices = None
        self._point_dists = None
        self._max_point_dist = None
        self._grid_neighbor_transforms = None
        self._symmetry_data = None
        self._maxima_mask = None
        self._minima_mask = None

    @property
    def total(self) -> NDArray[float]:
        """

        Returns
        -------
        NDArray[float]
            For charge densities, returns the total charge (spin-up + spin-down).
            For ELF returns the spin-up or single spin ELF.

        """
        return self.data["total"]

    @total.setter
    def total(self, new_total: NDArray[float]):
        self.data["total"] = new_total
        # reset cache
        self._reset_cache()

    @property
    def diff(self) -> NDArray[float] | None:
        """

        Returns
        -------
        NDArray[float]
            For charge densities, returns the magnetized charge (spin-up - spin-down).
            For ELF returns the spin-down ELF. If the file was not from a spin
            polarized calculation, this will be None.

        """
        return self.data.get("diff")

    @diff.setter
    def diff(self, new_diff):
        self.data["diff"] = new_diff
        # reset cache
        self._reset_cache()

    @property
    def shape(self) -> NDArray[int]:
        """

        Returns
        -------
        NDArray[int]
            The number of points along each axis of the grid.

        """
        return np.array(self.total.shape)

    @property
    def matrix(self) -> NDArray[float]:
        """

        Returns
        -------
        NDArray[float]
            A 3x3 matrix defining the a, b, and c sides of the unit cell. Each
            row is the corresponding lattice vector in cartesian space.

        """
        return self.structure.lattice.matrix

    @property
    def grid_indices(self) -> NDArray[int]:
        """

        Returns
        -------
        NDArray[int]
            The indices for all points on the grid. Uses 'C' ordering.

        """
        if self._grid_indices is None:
            self._grid_indices = np.indices(self.shape).reshape(3, -1).T
        return self._grid_indices

    @property
    def flat_grid_indices(self) -> NDArray[int]:
        """

        Returns
        -------
        NDArray[int]
            An array of the same shape as the grid where each entry is the index
            of that voxel if you were to flatten/ravel the grid. Uses 'C' ordering.

        """
        if self._flat_grid_indices is None:
            self._flat_grid_indices = np.arange(
                np.prod(self.shape), dtype=np.int64
            ).reshape(self.shape)
        return self._flat_grid_indices

    # @property
    # def interpolator(self) -> RegularGridInterpolator:
    #     if self._interpolator is None:
    #         t0 = time.time()
    #         if self.interpolator_method == "linear":
    #             pad = 1
    #         elif self.interpolator_method == "cubic":
    #             pad = 2
    #         else: # cubic or other
    #             pad = 3
    #         # pymatgen always sets their RegularGridInterpolator with linear interpolation
    #         # but that isn't always what we want. Additionally, I have found some
    #         # padding of the grid is usually required to get accurate interpolation
    #         # near the edges.
    #         x, y, z = self.dim
    #         padded_total = np.pad(self.data["total"], pad, mode="wrap")
    #         xpoints_pad = np.linspace(-pad, x + pad - 1, x + pad * 2) / x
    #         ypoints_pad = np.linspace(-pad, y + pad - 1, y + pad * 2) / y
    #         zpoints_pad = np.linspace(-pad, z + pad - 1, z + pad * 2) / z
    #         self._interpolator = RegularGridInterpolator(
    #             (xpoints_pad, ypoints_pad, zpoints_pad),
    #             padded_total,
    #             method=self.interpolator_method,
    #             bounds_error=True,
    #         )
    #         t1 = time.time()
    #         breakpoint()
    #     return self._interpolator

    # @interpolator.setter
    # def interpolator(self, value):
    #     self._interpolator = value

    # TODO: Do this with numba to reduce memory and probably increase speed
    @property
    def point_dists(self) -> NDArray[float]:
        """

        Returns
        -------
        NDArray[float]
            The distance from each point to the origin in cartesian coordinates.

        """
        if self._point_dists is None:
            cart_coords = self.grid_to_cart(self.grid_indices)
            a, b, c = self.matrix
            corners = [
                np.array([0, 0, 0]),
                a,
                b,
                c,
                a + b,
                a + c,
                b + c,
                a + b + c,
            ]
            distances = []
            for corner in corners:
                voxel_distances = np.linalg.norm(cart_coords - corner, axis=1).round(6)
                distances.append(voxel_distances)
            min_distances = np.min(np.column_stack(distances), axis=1)
            self._point_dists = min_distances.reshape(self.shape)
        return self._point_dists

    @property
    def point_volume(self) -> float:
        """

        Returns
        -------
        float
            The volume of a single point in the grid.

        """
        volume = self.structure.volume
        return volume / self.ngridpts

    @property
    def max_point_dist(self) -> float:
        """

        Returns
        -------
        float
            The maximum distance from the center of a point to one of its corners. This
            assumes the voxel is the same shape as the lattice.

        """
        if self._max_point_dist is None:
            # We need to find the coordinates that make up a single voxel. This
            # is just the cartesian coordinates of the unit cell divided by
            # its grid size
            a, b, c = self.matrix
            end = [0, 0, 0]
            vox_a = [x / self.shape[0] for x in a]
            vox_b = [x / self.shape[1] for x in b]
            vox_c = [x / self.shape[2] for x in c]
            # We want the three other vertices on the other side of the voxel. These
            # can be found by adding the vectors in a cycle (e.g. a+b, b+c, c+a)
            vox_a1 = [x + x1 for x, x1 in zip(vox_a, vox_b)]
            vox_b1 = [x + x1 for x, x1 in zip(vox_b, vox_c)]
            vox_c1 = [x + x1 for x, x1 in zip(vox_c, vox_a)]
            # The final vertex can be found by adding the last unsummed vector to any
            # of these
            end1 = [x + x1 for x, x1 in zip(vox_a1, vox_c)]
            # The center of the voxel sits exactly between the two ends
            center = [(x + x1) / 2 for x, x1 in zip(end, end1)]
            # Shift each point here so that the origin is the center of the
            # voxel.
            voxel_vertices = []
            for vector in [
                center,
                end,
                vox_a,
                vox_b,
                vox_c,
                vox_a1,
                vox_b1,
                vox_c1,
                end,
            ]:
                new_vector = [(x - x1) for x, x1 in zip(vector, center)]
                voxel_vertices.append(new_vector)

            # Now we need to find the maximum distance from the center of the voxel
            # to one of its edges. This should be at one of the vertices.
            # We can't say for sure which one is the largest distance so we find all
            # of their distances and return the maximum
            self._max_point_dist = max(
                [np.linalg.norm(vector) for vector in voxel_vertices]
            )
        return self._max_point_dist

    @cached_property
    def point_neighbor_voronoi_transforms(
        self,
    ) -> tuple[NDArray, NDArray, NDArray, NDArray]:
        """

        Returns
        -------
        tuple[NDArray, NDArray, NDArray, NDArray]
            The transformations, neighbor distances, areas, and vertices of the
            voronoi surface between any point and its neighbors in the grid.
            This is used in the 'weight' method for Bader analysis.

        """
        # I go out to 2 voxels away here. I think 1 would probably be fine, but
        # this doesn't take much more time and I'm certain this will capture the
        # full voronoi cell.
        voxel_positions = np.array(list(itertools.product([-2, -1, 0, 1, 2], repeat=3)))
        center = math.floor(len(voxel_positions) / 2)
        cart_positions = self.grid_to_cart(voxel_positions)
        voronoi = Voronoi(cart_positions)
        site_neighbors = []
        facet_vertices = []
        facet_areas = []

        def facet_area(vertices):
            # Compute the area of a planar polygon embedded in 3D by fanning
            # triangles out from the first vertex and summing the cross-product
            # area of each triangle.
            p0 = np.array(vertices[0])
            area = 0
            for i in range(1, len(vertices) - 1):
                p1 = np.array(vertices[i])
                p2 = np.array(vertices[i + 1])
                area += np.linalg.norm(np.cross(p1 - p0, p2 - p0)) / 2.0
            return area

        for i, neighbor_pair in enumerate(voronoi.ridge_points):
            if center in neighbor_pair:
                neighbor = [i for i in neighbor_pair if i != center][0]
                vertex_indices = voronoi.ridge_vertices[i]
                vertices = voronoi.vertices[vertex_indices]
                area = facet_area(vertices)
                site_neighbors.append(neighbor)
                facet_vertices.append(vertices)
                facet_areas.append(area)
        transforms = voxel_positions[np.array(site_neighbors)]
        cart_transforms = cart_positions[np.array(site_neighbors)]
        transform_dists = np.linalg.norm(cart_transforms, axis=1)
        return transforms, transform_dists, np.array(facet_areas), facet_vertices

    @cached_property
    def point_neighbor_transforms(self) -> (NDArray[int], NDArray[float]):
        """

        Returns
        -------
        (NDArray[int], NDArray[float])
            A tuple where the first entry is a 26x3 array of transformations in
            voxel space from any point to its neighbors and the second is the
            distance to each of these neighbors in cartesian space.

        """
        neighbors = np.array(
            [i for i in itertools.product([-1, 0, 1], repeat=3) if i != (0, 0, 0)]
        ).astype(np.int64)
        cart_coords = self.grid_to_cart(neighbors)
        dists = np.linalg.norm(cart_coords, axis=1)

        return neighbors, dists

    @cached_property
    def point_neighbor_face_tranforms(self) -> (NDArray[int], NDArray[float]):
        """

        Returns
        -------
        (NDArray[int], NDArray[float])
            A tuple where the first entry is a 6x3 array of transformations in
            voxel space from any voxel to its face sharing neighbors and the
            second is the distance to each of these neighbors in cartesian space.

        """
        all_neighbors, all_dists = self.point_neighbor_transforms
        faces = []
        dists = []
        for i in range(len(all_neighbors)):
            if np.sum(np.abs(all_neighbors[i])) == 1:
                faces.append(all_neighbors[i])
                dists.append(all_dists[i])
        return np.array(faces).astype(int), np.array(dists)

    @property
    def grid_neighbor_transforms(self) -> list:
        """
        The transforms for translating a grid index to neighboring unit
        cells. This is necessary for the many voxels that will not be directly
        within an atom's partitioning.

        Returns
        -------
        list
            A list of voxel grid_neighbor_transforms unique to the grid dimensions.

        """
        if self._grid_neighbor_transforms is None:
            a, b, c = self.shape
            grid_neighbor_transforms = [
                (t, u, v)
                for t, u, v in itertools.product([-a, 0, a], [-b, 0, b], [-c, 0, c])
            ]
            # sort grid_neighbor_transforms. There may be a better way of sorting them. I
            # noticed that generally the correct site was found most commonly
            # for the original site and generally was found at grid_neighbor_transforms that
            # were either all negative/0 or positive/0
            grid_neighbor_transforms_sorted = []
            for item in grid_neighbor_transforms:
                if all(val <= 0 for val in item):
                    grid_neighbor_transforms_sorted.append(item)
                elif all(val >= 0 for val in item):
                    grid_neighbor_transforms_sorted.append(item)
            for item in grid_neighbor_transforms:
                if item not in grid_neighbor_transforms_sorted:
                    grid_neighbor_transforms_sorted.append(item)
            grid_neighbor_transforms_sorted.insert(
                0, grid_neighbor_transforms_sorted.pop(7)
            )
            self._grid_neighbor_transforms = grid_neighbor_transforms_sorted
        return self._grid_neighbor_transforms

    @property
    def grid_resolution(self) -> float:
        """

        Returns
        -------
        float
            The number of voxels per unit volume.

        """
        volume = self.structure.volume
        number_of_voxels = self.ngridpts
        return number_of_voxels / volume

    @property
    def symmetry_data(self):
        """

        Returns
        -------
        The pymatgen symmetry dataset for the Grid's Structure object.

        """
        if self._symmetry_data is None:
            self._symmetry_data = SpacegroupAnalyzer(
                self.structure
            ).get_symmetry_dataset()
        return self._symmetry_data

    @property
    def equivalent_atoms(self) -> NDArray[int]:
        """

        Returns
        -------
        NDArray[int]
            The equivalent atoms in the Structure.

        """
        return self.symmetry_data.equivalent_atoms

    @property
    def maxima_mask(self) -> NDArray[bool]:
        """

        Returns
        -------
        NDArray[bool]
            A mask with the same dimensions as the data that is True at local
            maxima. Adjacent points with the same value will both be labeled as
            True.
        """
        if self._maxima_mask is None:
            # avoid circular import
            from baderkit.core.methods.shared_numba import get_maxima

            self._maxima_mask = get_maxima(
                self.total,
                neighbor_transforms=self.point_neighbor_transforms[0],
                vacuum_mask=np.zeros_like(self.total, dtype=np.bool_),
            )
        return self._maxima_mask

    @property
    def minima_mask(self) -> NDArray[bool]:
        """

        Returns
        -------
        NDArray[bool]
            A mask with the same dimensions as the data that is True at local
            minima. Adjacent points with the same value will both be labeled as
            True.
        """
        if self._minima_mask is None:
            # avoid circular import
            from baderkit.core.methods.shared_numba import get_maxima

            self._minima_mask = get_maxima(
                self.total,
                neighbor_transforms=self.point_neighbor_transforms[0],
                vacuum_mask=np.zeros_like(self.total, dtype=np.bool_),
                use_minima=True,
            )
        return self._minima_mask
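    # Extrema usage sketch (comments only; assumes `grid` is an existing Grid
    # instance and numpy is imported as np):
    #
    #     maxima_coords = np.argwhere(grid.maxima_mask)   # voxel indices of maxima
    #     minima_coords = np.argwhere(grid.minima_mask)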

    def value_at(
        self,
        x: float,
        y: float,
        z: float,
    ):
        """Get a data value from self.data at a given point (x, y, z) in terms
        of fractional lattice parameters. Will be interpolated using a
        cubic spline on self.data if (x, y, z) is not in the original
        set of data points.

        Parameters
        ----------
        x : float
            Fraction of lattice vector a.
        y: float
            Fraction of lattice vector b.
        z: float
            Fraction of lattice vector c.

        Returns
        -------
        float
            Value from self.data (potentially interpolated) corresponding to
            the point (x, y, z).
        """
        # interpolate value
        return self.interpolator([x, y, z])[0]

    def values_at(
        self,
        frac_coords: NDArray[float],
    ) -> list[float]:
        """
        Interpolates the value of the data at each fractional coordinate in a
        given list or array.

        Parameters
        ----------
        frac_coords : NDArray
            The fractional coordinates to interpolate values at with shape
            N, 3.

        Returns
        -------
        list[float]
            The interpolated value at each fractional coordinate.

        """
        # interpolate values
        return self.interpolator(frac_coords)

    def linear_slice(self, p1: NDArray[float], p2: NDArray[float], n: int = 100):
        """
        Interpolates the data between two fractional coordinates.

        Parameters
        ----------
        p1 : NDArray[float]
            The fractional coordinates of the first point
        p2 : NDArray[float]
            The fractional coordinates of the second point
        n : int, optional
            The number of points to collect along the line

        Returns
        -------
        list[float]
            List of n data points (mostly interpolated) representing a linear
            slice of the data from point p1 to point p2.
        """
        if type(p1) not in {list, np.ndarray}:
            raise TypeError(
                f"type of p1 should be list or np.ndarray, got {type(p1).__name__}"
            )
        if len(p1) != 3:
            raise ValueError(f"length of p1 should be 3, got {len(p1)}")
        if type(p2) not in {list, np.ndarray}:
            raise TypeError(
                f"type of p2 should be list or np.ndarray, got {type(p2).__name__}"
            )
        if len(p2) != 3:
            raise ValueError(f"length of p2 should be 3, got {len(p2)}")

        x_pts = np.linspace(p1[0], p2[0], num=n)
        y_pts = np.linspace(p1[1], p2[1], num=n)
        z_pts = np.linspace(p1[2], p2[2], num=n)
        frac_coords = np.column_stack((x_pts, y_pts, z_pts))
        return self.values_at(frac_coords)
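    # Interpolation usage sketch (comments only; assumes `grid` is an existing
    # Grid instance):
    #
    #     v = grid.value_at(0.25, 0.50, 0.75)               # one fractional point
    #     profile = grid.linear_slice([0, 0, 0], [0.5, 0.5, 0.5], n=50)
    #
    # linear_slice builds n evenly spaced fractional coordinates between the two
    # points and evaluates them with values_at.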

    def get_box_around_point(self, point: NDArray, neighbor_size: int = 1) -> NDArray:
        """
        Gets a box around a given point taking into account wrapping at cell
        boundaries.

        Parameters
        ----------
        point : NDArray
            The indices of the point to get a box around.
        neighbor_size : int, optional
            The size of the box on either side of the point. The default is 1.

        Returns
        -------
        NDArray
            A slice of the grid taken around the provided point.

        """

        slices = []
        for dim, c in zip(self.shape, point):
            idx = np.arange(c - neighbor_size, c + neighbor_size + 1) % dim
            idx = idx.astype(int)
            slices.append(idx)
        return self.total[np.ix_(slices[0], slices[1], slices[2])]

    def climb_to_max(self, frac_coords: NDArray) -> NDArray[float]:
        """
        Hill climbs to a maximum from the provided fractional coordinate.

        Parameters
        ----------
        frac_coords : NDArray
            The starting coordinate for hill climbing.

        Returns
        -------
        NDArray[float]
            The final fractional coordinates after hill climbing.
        float
            The data value at the found maximum

        """
        # Convert to voxel coords and round
        coords = np.round(self.frac_to_grid(frac_coords)).astype(int)
        # wrap around edges of cell
        coords %= self.shape
        i, j, k = coords

        # import numba function to avoid circular import
        from baderkit.core.toolkit.grid_numba import climb_to_max

        # get neighbors and dists
        neighbor_transforms, neighbor_dists = self.point_neighbor_transforms
        # get max
        mi, mj, mk = climb_to_max(
            self.total, i, j, k, neighbor_transforms, neighbor_dists
        )
        # get value at max
        max_val = self.total[mi, mj, mk]
        # Now we check if this point borders other points with the same value
        box = self.get_box_around_point((mi, mj, mk))
        all_max = np.argwhere(box == max_val)
        avg_pos = all_max.mean(axis=0)
        local_offset = avg_pos - 1  # shift from subset center
        current_coords = np.array((mi, mj, mk)) + local_offset
        current_coords %= self.shape

        new_frac_coords = self.grid_to_frac(current_coords)
        x, y, z = new_frac_coords

        return new_frac_coords, self.value_at(x, y, z)
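    # Hill-climbing usage sketch (comments only; assumes `grid` is an existing
    # Grid instance and numpy is imported as np):
    #
    #     frac_max, value = grid.climb_to_max(np.array([0.40, 0.40, 0.40]))
    #
    # The starting coordinate is snapped to the nearest voxel, climbed uphill on
    # the "total" data, and averaged over any neighboring voxels that share the
    # maximum value before being converted back to fractional coordinates.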

    @staticmethod
    def get_2x_supercell(data: NDArray | None = None) -> NDArray:
        """
        Duplicates data to make a 2x2x2 supercell

        Parameters
        ----------
        data : NDArray | None, optional
            The data to duplicate. The default is None.

        Returns
        -------
        NDArray
            A new array with the data doubled in each direction
        """
        new_data = np.tile(data, (2, 2, 2))
        return new_data

    def get_points_in_radius(
        self,
        point: NDArray,
        radius: float,
    ) -> NDArray[int]:
        """
        Gets the indices of the points in a radius around a point

        Parameters
        ----------
        point : NDArray
            The indices of the point to perform the operation on.
        radius : float
            The radius in cartesian distance units to find indices around the
            point.

        Returns
        -------
        NDArray[int]
            The point indices in the sphere around the provided point.

        """
        point = np.array(point)
        # Get the distance from each point to the origin
        point_distances = self.point_dists

        # Get the indices that are within the radius
        sphere_indices = np.where(point_distances <= radius)
        sphere_indices = np.column_stack(sphere_indices)

        # Get indices relative to the point
        sphere_indices = sphere_indices + point
        # adjust points to wrap around grid
        # line = [[round(float(a % b), 12) for a, b in zip(position, grid_data.shape)]]
        new_x = (sphere_indices[:, 0] % self.shape[0]).astype(int)
        new_y = (sphere_indices[:, 1] % self.shape[1]).astype(int)
        new_z = (sphere_indices[:, 2] % self.shape[2]).astype(int)
        sphere_indices = np.column_stack([new_x, new_y, new_z])
        # return new_x, new_y, new_z
        return sphere_indices

    def get_transformation_in_radius(self, radius: float) -> NDArray[int]:
        """
        Gets the transformations required to move from a point to the points
        surrounding it within the provided radius

        Parameters
        ----------
        radius : float
            The radius in cartesian distance units around the voxel.

        Returns
        -------
        NDArray[int]
            An array of transformations to add to a point to get to each of the
            points within the radius surrounding it.

        """
        # Get voxels around origin
        voxel_distances = self.point_dists
        # sphere_grid = np.where(voxel_distances <= radius, True, False)
        # eroded_grid = binary_erosion(sphere_grid)
        # shell_indices = np.where(sphere_grid!=eroded_grid)
        shell_indices = np.where(voxel_distances <= radius)
        # Now we want to translate these indices to next to the corner so that
        # we can use them as transformations to move a voxel to the edge
        final_shell_indices = []
        for a, x in zip(self.shape, shell_indices):
            new_x = x - a
            abs_new_x = np.abs(new_x)
            new_x_filter = abs_new_x < x
            final_x = np.where(new_x_filter, new_x, x)
            final_shell_indices.append(final_x)

        return np.column_stack(final_shell_indices)

    # def get_padded_grid_axes(
    #     self, padding: int = 0
    # ) -> tuple[NDArray, NDArray, NDArray]:
    #     """
    #     Gets the the possible indices for each dimension of a padded grid.
    #     e.g. if the original charge density grid is 20x20x20, and is padded
    #     with one extra layer on each side, this function will return three
    #     arrays with integers from 0 to 21.

    #     Parameters
    #     ----------
    #     padding : int, optional
    #         The amount the grid has been padded. The default is 0.

    #     Returns
    #     -------
    #     tuple[NDArray, NDArray, NDArray]
    #         Three arrays with lengths the same as the grids shape.

    #     """

    #     grid = self.total
    #     a = np.linspace(
    #         0,
    #         grid.shape[0] + (padding - 1) * 2 + 1,
    #         grid.shape[0] + padding * 2,
    #     )
    #     b = np.linspace(
    #         0,
    #         grid.shape[1] + (padding - 1) * 2 + 1,
    #         grid.shape[1] + padding * 2,
    #     )
    #     c = np.linspace(
    #         0,
    #         grid.shape[2] + (padding - 1) * 2 + 1,
    #         grid.shape[2] + padding * 2,
    #     )
    #     return a, b, c

    def copy(self) -> Self:
        """
        Convenience method to get a copy of the current Grid.

        Returns
        -------
        Self
            A copy of the Grid.

        """
        return Grid(
            structure=self.structure.copy(),
            data=self.data.copy(),
            data_aug=self.data_aug.copy(),
            source_format=self.source_format,
            data_type=self.data_type,
            distance_matrix=self._distance_matrix.copy(),
        )

    def get_atoms_in_volume(self, volume_mask: NDArray[bool]) -> NDArray[int]:
        """
        Checks if an atom is within the provided volume. This only checks the
        point right where the atom is located, so a shell around the atom will
        not be caught.

        Parameters
        ----------
        volume_mask : NDArray[bool]
            A mask of the same shape as the current grid.

        Returns
        -------
        NDArray[int]
            A list of atoms in the provided mask.

        """
        # Make sure the shape of the mask is the same as the grid
        assert np.all(
            np.equal(self.shape, volume_mask.shape)
        ), "Mask and Grid must be the same shape"
        # Get the voxel coordinates for each atom
        site_voxel_coords = self.frac_to_grid(self.structure.frac_coords).astype(int)
        # Return the indices of the atoms that are in the mask
        atoms_in_volume = volume_mask[
            site_voxel_coords[:, 0], site_voxel_coords[:, 1], site_voxel_coords[:, 2]
        ]
        return np.argwhere(atoms_in_volume)

    def get_atoms_surrounded_by_volume(
        self, volume_mask: NDArray[bool], return_type: bool = False
    ) -> NDArray[int]:
        """
        Checks if a mask completely surrounds any of the atoms
        in the structure. This method uses scipy's ndimage package to
        label features in the grid combined with a supercell to check
        if atoms identical through translation are connected.

        Parameters
        ----------
        volume_mask : NDArray[bool]
            A mask of the same shape as the current grid.
        return_type : bool, optional
            Whether or not to return the type of surrounding. 0 indicates that
            the atom sits exactly in the volume. 1 indicates that it is surrounded
            but not directly in it. The default is False.

        Returns
        -------
        NDArray[int]
            The atoms that are surrounded by this mask.

        """
        # Make sure the shape of the mask is the same as the grid
        assert np.all(
            np.equal(self.shape, volume_mask.shape)
        ), "Mask and Grid must be the same shape"
        # first we get any atoms that are within the mask itself. These won't be
        # found otherwise because they will always sit in unlabeled regions.
        structure = np.ones([3, 3, 3])
        dilated_mask = binary_dilation(volume_mask, structure)
        init_atoms = self.get_atoms_in_volume(dilated_mask)
        # check if we've surrounded all of our atoms. If so, we can return and
        # skip the rest
        if len(init_atoms) == len(self.structure):
            if return_type:
                return init_atoms, np.zeros(len(init_atoms))
            return init_atoms
        # Now we create a supercell of the mask so we can check connections to
        # neighboring cells. This will be used to check if the feature connects
        # to itself in each direction
        dilated_supercell_mask = self.get_2x_supercell(dilated_mask)
        # We also get an inversion of this mask. This will be used to check if
        # the mask surrounds each atom. To do this, we use the dilated supercell
        # We do this to avoid thin walls being considered connections
        # in the inverted mask
        inverted_mask = dilated_supercell_mask == False
        # Now we use scipy to label unique features in our masks

        inverted_feature_supercell = self.label(inverted_mask, structure)

        # if an atom was fully surrounded, it should sit inside one of our labels.
        # The same atom in an adjacent unit cell should have a different label.
        # To check this, we need to look at the atom in each section of the supercell
        # and see if it has a different label in each.
        # Similarly, if the feature is disconnected from itself in each unit cell
        # any voxel in the feature should have different labels in each section.
        # If not, the feature is connected to itself in multiple directions and
        # must surround many atoms.
        transformations = np.array(list(itertools.product([0, 1], repeat=3)))
        transformations = self.frac_to_grid(transformations)
        # Check each atom to determine how many atoms it surrounds
        surrounded_sites = []
        for i, site in enumerate(self.structure):
            # Get the voxel coords of each atom in their equivalent spots in each
            # quadrant of the supercell
            frac_coords = site.frac_coords
            voxel_coords = self.frac_to_grid(frac_coords)
            transformed_coords = (transformations + voxel_coords).astype(int)
            # Get the feature label at each transformation. If the atom is not surrounded
            # by this basin, at least some of these feature labels will be the same
            features = inverted_feature_supercell[
                transformed_coords[:, 0],
                transformed_coords[:, 1],
                transformed_coords[:, 2],
            ]
            if len(np.unique(features)) == 8:
                # The atom is completely surrounded by this basin and the basin belongs
                # to this atom
                surrounded_sites.append(i)
        surrounded_sites.extend(init_atoms)
        surrounded_sites = np.unique(surrounded_sites)
        types = []
        for site in surrounded_sites:
            if site in init_atoms:
                types.append(0)
            else:
                types.append(1)
        if return_type:
            return surrounded_sites, types
        return surrounded_sites

    def check_if_infinite_feature(self, volume_mask: NDArray[bool]) -> bool:
        """
        Checks if a mask extends infinitely in at least one direction.
        This method uses scipy's ndimage package to label features in the mask
        combined with a supercell to check if the label matches between unit cells.

        Parameters
        ----------
        volume_mask : NDArray[bool]
            A mask of the same shape as the current grid.

        Returns
        -------
        bool
            Whether or not this is an infinite feature.

        """
        # First we check that there is at least one feature in the mask. If not
        # we return False as there is no feature.
        if (~volume_mask).all():
            return False

        structure = np.ones([3, 3, 3])
        # Now we create a supercell of the mask so we can check connections to
        # neighboring cells. This will be used to check if the feature connects
        # to itself in each direction
        supercell_mask = self.get_2x_supercell(volume_mask)
        # Now we use scipy to label unique features in our masks
        feature_supercell = self.label(supercell_mask, structure)
        # Now we check if we have the same label in any of the adjacent unit
        # cells. If yes we have an infinite feature.
        transformations = np.array(list(itertools.product([0, 1], repeat=3)))
        transformations = self.frac_to_grid(transformations)
        initial_coord = np.argwhere(volume_mask)[0]
        transformed_coords = (transformations + initial_coord).astype(int)

        # Get the feature label at each translated copy of the starting voxel.
        # If the feature connects to itself across the cell boundary, at least
        # two of these labels will be the same.
        features = feature_supercell[
            transformed_coords[:, 0], transformed_coords[:, 1], transformed_coords[:, 2]
        ]

        inf_feature = False
        # If any of the transformed coords have the same feature value, this
        # feature extends between unit cells in at least 1 direction and is
        # infinite. This corresponds to the list of unique features being below
        # 8
        if len(np.unique(features)) < 8:
            inf_feature = True

        return inf_feature
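    # Feature-topology usage sketch (comments only; assumes `grid` is an existing
    # Grid instance and `mask` is a boolean array with the grid's shape):
    #
    #     enclosed = grid.get_atoms_surrounded_by_volume(mask)
    #     is_infinite = grid.check_if_infinite_feature(mask)
    #
    # Both checks label a 2x2x2 supercell of the mask: an atom counts as
    # "surrounded" when its eight periodic images fall in eight different labels
    # of the inverted mask, and a feature is "infinite" when its own periodic
    # images share a label.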

    def regrid(
        self,
        desired_resolution: int = 1200,
        new_shape: np.array = None,
        order: int = 3,
    ) -> Self:
        """
        Returns a new grid resized using scipy's ndimage.zoom method

        Parameters
        ----------
        desired_resolution : int, optional
            The desired resolution in voxels/A^3. The default is 1200.
        new_shape : np.array, optional
            The new array shape. Takes precedence over desired_resolution. The default is None.
        order : int, optional
            The order of spline interpolation to use. The default is 3.

        Returns
        -------
        Self
            A new Grid object near the desired resolution.
        """

        # get the original grid size and lattice volume.
        shape = self.shape
        volume = self.structure.volume

        if new_shape is None:
            # calculate how much the number of voxels along each unit cell must be
            # multiplied to reach the desired resolution.
            scale_factor = ((desired_resolution * volume) / shape.prod()) ** (1 / 3)

            # calculate the new grid shape. round up to the nearest integer for each
            # side
            new_shape = np.around(shape * scale_factor).astype(np.int32)

        # get the factor to zoom by
        zoom_factor = new_shape / shape

        # zoom each piece of data
        new_data = {}
        for key, data in self.data.items():
            new_data[key] = zoom(
                data, zoom_factor, order=order, mode="grid-wrap", grid_mode=True
            )

        # TODO: Add augment data?
        return Grid(structure=self.structure, data=new_data)
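    # Regridding usage sketch (comments only; assumes `grid` is an existing Grid
    # instance and numpy is imported as np):
    #
    #     fine = grid.regrid(desired_resolution=1200)          # ~1200 voxels/A^3
    #     fixed = grid.regrid(new_shape=np.array([96, 96, 96]))
    #
    # Only the data arrays are resampled; the returned Grid keeps the original
    # structure and drops any augmentation data.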

    def split_to_spin(self) -> tuple[Self, Self]:
        """
        Splits the grid into two Grid objects representing the spin-up and
        spin-down contributions.

        Returns
        -------
        tuple[Self, Self]
            The spin-up and spin-down Grid objects.

        """

        # first check if the grid has spin parts
        assert (
            self.is_spin_polarized
        ), "Only one set of data detected. The grid cannot be split into spin up and spin down"
        assert not self.is_soc

        # Now we get the separate data parts. If the data is ELF, the parts are
        # stored as total=spin up and diff = spin down
        if self.data_type == "elf":
            logging.info(
                "Splitting Grid using ELFCAR conventions (spin-up in 'total', spin-down in 'diff')"
            )
            spin_up_data = self.total.copy()
            spin_down_data = self.diff.copy()
        elif self.data_type == "charge":
            logging.info(
                "Splitting Grid using CHGCAR conventions (spin-up + spin-down in 'total', spin-up - spin-down in 'diff')"
            )
            spin_data = self.spin_data
            # pymatgen uses some custom class as keys here
            for key in spin_data.keys():
                if key.value == 1:
                    spin_up_data = spin_data[key].copy()
                elif key.value == -1:
                    spin_down_data = spin_data[key].copy()

        # convert to dicts
        spin_up_data = {"total": spin_up_data}
        spin_down_data = {"total": spin_down_data}

        # get augment data
        aug_up_data = (
            {"total": self.data_aug["total"]} if "total" in self.data_aug else {}
        )
        aug_down_data = (
            {"total": self.data_aug["diff"]} if "diff" in self.data_aug else {}
        )

        spin_up_grid = Grid(
            structure=self.structure.copy(),
            data=spin_up_data,
            data_aug=aug_up_data,
            data_type=self.data_type,
            source_format=self.source_format,
        )
        spin_down_grid = Grid(
            structure=self.structure.copy(),
            data=spin_down_data,
            data_aug=aug_down_data,
            data_type=self.data_type,
            source_format=self.source_format,
        )

        return spin_up_grid, spin_down_grid
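    # Spin-splitting usage sketch (comments only; requires a spin-polarized grid,
    # i.e. one carrying both "total" and "diff" data):
    #
    #     up_grid, down_grid = grid.split_to_spin()
    #
    # Each returned Grid stores a single spin channel in its "total" entry,
    # following ELFCAR or CHGCAR conventions depending on data_type.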

    @staticmethod
    def label(input: NDArray, structure: NDArray = np.ones([3, 3, 3])) -> NDArray[int]:
        """
        Uses scipy's ndimage package to label an array, and corrects for
        periodic boundaries

        Parameters
        ----------
        input : NDArray
            The array to label.
        structure : NDArray, optional
            The structuring element defining feature connections.
            The default is np.ones([3, 3, 3]).

        Returns
        -------
        NDArray[int]
            An array of the same shape as the original with labels for each unique
            feature.

        """

        if structure is not None:
            labeled_array, _ = label(input, structure)
            if len(np.unique(labeled_array)) == 1:
                # there is one feature or no features
                return labeled_array
            # Features connected through opposite sides of the unit cell should
            # have the same label, but they don't currently. To handle this, we
            # pad our featured grid, re-label it, and check if the new labels
            # contain multiple of our previous labels.
            padded_featured_grid = np.pad(labeled_array, 1, "wrap")
            relabeled_array, label_num = label(padded_featured_grid, structure)
        else:
            labeled_array, _ = label(input)
            padded_featured_grid = np.pad(labeled_array, 1, "wrap")
            relabeled_array, label_num = label(padded_featured_grid)

        # We want to keep track of which features are connected to each other
        unique_connections = [[] for i in range(len(np.unique(labeled_array)))]

        for i in np.unique(relabeled_array):
            # for i in range(label_num):
            # Get the list of features that are in this super feature
            mask = relabeled_array == i
            connected_features = list(np.unique(padded_featured_grid[mask]))
            # Iterate over these features. If they exist in a connection that we
            # already have, we want to extend the connection to include any other
            # features in this super feature
            for j in connected_features:

                unique_connections[j].extend([k for k in connected_features if k != j])

                unique_connections[j] = list(np.unique(unique_connections[j]))

        # create set/list to keep track of which features have already been connected
        # to others and the full list of connections
        already_connected = set()
        reduced_connections = []

        # loop over each shared connection
        for i in range(len(unique_connections)):
            if i in already_connected:
                # we've already done these connections, so we skip
                continue
            # create sets of connections to compare with as we add more
            connections = set()
            new_connections = set(unique_connections[i])
            while connections != new_connections:
                # loop over the connections we've found so far. As we go, add
                # any features we encounter to our set.
                connections = new_connections.copy()
                for j in connections:
                    already_connected.add(j)
                    new_connections.update(unique_connections[j])

            # If we found any connections, append them to our list of reduced connections
            if connections:
                reduced_connections.append(sorted(new_connections))

        # For each set of connections in our reduced set, relabel all values to
        # the lowest one.
        for connections in reduced_connections:
            connected_features = np.unique(connections)
            lowest_idx = connected_features[0]
            for higher_idx in connected_features[1:]:
                labeled_array = np.where(
                    labeled_array == higher_idx, lowest_idx, labeled_array
                )

        # Now we reduce the feature labels so that they start at 0
        for i, j in enumerate(np.unique(labeled_array)):
            labeled_array = np.where(labeled_array == j, i, labeled_array)

        return labeled_array

    def linear_add(self, other: Self, scale_factor=1.0) -> Self:
        """
        Method to do a linear sum of volumetric objects. Used by + and -
        operators as well. Returns a VolumetricData object containing the
        linear sum.

        Parameters
        ----------
        other : Grid
            Another Grid object
        scale_factor : float
            Factor to scale the other data by

        Returns
        -------
            Grid corresponding to self + scale_factor * other.
        """
        if self.structure != other.structure:
            logging.warning(
                "Structures are different. Make sure you know what you are doing...",
                stacklevel=2,
            )
        if list(self.data) != list(other.data):
            raise ValueError(
                "Data have different keys! Maybe one is spin-polarized and the other is not?"
            )

        # TODO: add consistency checks on the data
        data = {}
        for k in self.data:
            data[k] = self.data[k] + scale_factor * other.data[k]

        new = deepcopy(self)
        new.data = data.copy()
        new.data_aug = {}  # TODO: Can this be added somehow?
        return new

    # @staticmethod
    # def periodic_center_of_mass(
    #     labels: NDArray[int], label_vals: NDArray[int] = None
    # ) -> NDArray:
    #     """
    #     Computes center of mass for each label in a 3D periodic array.

    #     Parameters
    #     ----------
    #     labels : NDArray[int]
    #         3D array of integer labels.
    #     label_vals : NDArray[int], optional
    #         list/array of unique labels to compute. None will return all.

    #     Returns
    #     -------
    #     NDArray
    #         A 3xN array of centers of mass in voxel index coordinates.
    #     """

    #     shape = labels.shape
    #     if label_vals is None:
    #         label_vals = np.unique(labels)
    #         label_vals = label_vals[label_vals != 0]

    #     centers = []
    #     for val in label_vals:
    #         # get the voxel coords for each voxel in this label
    #         coords = np.array(np.where(labels == val)).T  # shape (N, 3)
    #         # If we have no coords for this label, we skip
    #         if coords.shape[0] == 0:
    #             continue

    #         # From ChatGPT: Get center of mass using circular (angular) means
    #         center = []
    #         for i, size in enumerate(shape):
    #             angles = coords[:, i] * 2 * np.pi / size
    #             x = np.cos(angles).mean()
    #             y = np.sin(angles).mean()
    #             mean_angle = np.arctan2(y, x)
    #             mean_pos = (mean_angle % (2 * np.pi)) * size / (2 * np.pi)
    #             center.append(mean_pos)
    #         centers.append(center)
    #     centers = np.array(centers)
    #     centers = centers.round(6)

    #     return centers

    # The following method finds critical points using the gradient. However, this
    # assumes an orthogonal unit cell and should be improved.
    # @staticmethod
    # def get_critical_points(
    #     array: NDArray, threshold: float = 5e-03, return_hessian_s: bool = True
    # ) -> tuple[NDArray, NDArray, NDArray]:
    #     """
    #     Finds the critical points in the grid. If return_hessians is true,
    #     the hessian matrices for each critical point will be returned along
    #     with their type index.
    #     NOTE: This method is VERY dependent on grid resolution and the provided
    #     threshold.

    #     Parameters
    #     ----------
    #     array : NDArray
    #         The array to find critical points in.
    #     threshold : float, optional
    #         The threshold below which the hessian will be considered 0.
    #         The default is 5e-03.
    #     return_hessian_s : bool, optional
    #         Whether or not to return the hessian signs. The default is True.

    #     Returns
    #     -------
    #     tuple[NDArray, NDArray, NDArray]
    #         The critical points and values.

    #     """

    #     # get gradient using a padded grid to handle periodicity
    #     padding = 2
    #     # a = np.linspace(
    #     #     0,
    #     #     array.shape[0] + (padding - 1) * 2 + 1,
    #     #     array.shape[0] + padding * 2,
    #     # )
    #     # b = np.linspace(
    #     #     0,
    #     #     array.shape[1] + (padding - 1) * 2 + 1,
    #     #     array.shape[1] + padding * 2,
    #     # )
    #     # c = np.linspace(
    #     #     0,
    #     #     array.shape[2] + (padding - 1) * 2 + 1,
    #     #     array.shape[2] + padding * 2,
    #     # )
    #     padded_array = np.pad(array, padding, mode="wrap")
    #     dx, dy, dz = np.gradient(padded_array)

    #     # get magnitude of the gradient
    #     magnitude = np.sqrt(dx**2 + dy**2 + dz**2)

    #     # unpad the magnitude
    #     slicer = tuple(slice(padding, -padding) for _ in range(3))
    #     magnitude = magnitude[slicer]

    #     # now we want to get where the magnitude is close to 0. To do this, we
    #     # will create a mask where the magnitude is below a threshold. We will
    #     # then label the regions where this is true using scipy, then combine
    #     # the regions into one
    #     magnitude_mask = magnitude < threshold
    #     # critical_points = np.where(magnitude<threshold)
    #     # padded_critical_points = np.array(critical_points).T + padding

    #     label_structure = np.ones((3, 3, 3), dtype=int)
    #     labeled_magnitude_mask = Grid.label(magnitude_mask, label_structure)
    #     min_indices = []
    #     for idx in np.unique(labeled_magnitude_mask):
    #         label_mask = labeled_magnitude_mask == idx
    #         label_indices = np.where(label_mask)
    #         min_mag = magnitude[label_indices].min()
    #         min_indices.append(np.argwhere((magnitude == min_mag) & label_mask)[0])
    #     min_indices = np.array(min_indices)

    #     critical_points = min_indices[:, 0], min_indices[:, 1], min_indices[:, 2]

    #     # critical_points = self.periodic_center_of_mass(labeled_magnitude_mask)
    #     padded_critical_points = tuple([i + padding for i in critical_points])
    #     values = array[critical_points]
    #     # # get the value at each of these critical points
    #     # fn_values = RegularGridInterpolator((a, b, c), padded_array , method="linear")
    #     # values = fn_values(padded_critical_points)

    #     if not return_hessian_s:
    #         return critical_points, values

    #     # now we want to get the hessian eigenvalues around each of these points
    #     # using interpolation. First, we get the second derivatives
    #     d2f_dx2 = np.gradient(dx, axis=0)
    #     d2f_dy2 = np.gradient(dy, axis=1)
    #     d2f_dz2 = np.gradient(dz, axis=2)
    #     # # now create interpolation functions for each
    #     # fn_dx2 = RegularGridInterpolator((a, b, c), d2f_dx2, method="linear")
    #     # fn_dy2 = RegularGridInterpolator((a, b, c), d2f_dy2, method="linear")
    #     # fn_dz2 = RegularGridInterpolator((a, b, c), d2f_dz2, method="linear")
    #     # and calculate the hessian eigenvalues for each point
    #     # H00 = fn_dx2(padded_critical_points)
    #     # H11 = fn_dy2(padded_critical_points)
    #     # H22 = fn_dz2(padded_critical_points)
    #     H00 = d2f_dx2[padded_critical_points]
    #     H11 = d2f_dy2[padded_critical_points]
    #     H22 = d2f_dz2[padded_critical_points]
    #     # summarize the hessian eigenvalues by getting the sum of their signs
    #     hessian_eigs = np.array([H00, H11, H22])
    #     hessian_eigs = np.moveaxis(hessian_eigs, 1, 0)
    #     hessian_eigs_signs = np.where(hessian_eigs > 0, 1, hessian_eigs)
    #     hessian_eigs_signs = np.where(hessian_eigs < 0, -1, hessian_eigs_signs)
    #     # Now we get the sum of signs for each set of hessian eigenvalues
    #     s = np.sum(hessian_eigs_signs, axis=1)

    #     return critical_points, values, s

    ###########################################################################
    # The following is a series of methods that are useful for converting between
    # voxel coordinates, fractional coordinates, and cartesian coordinates.
    # Voxel coordinates go from 0 to grid_size-1. Fractional coordinates go
    # from 0 to 1. Cartesian coordinates convert to real space based on the
    # crystal lattice.
    ###########################################################################
    def get_voxel_coords_from_index(self, site: int) -> NDArray[int]:
        """
        Takes in an atom's site index and returns the equivalent voxel grid index.

        Parameters
        ----------
        site : int
            The index of the site to find the grid index for.

        Returns
        -------
        NDArray[int]
            A voxel grid index.

        """
        return self.frac_to_grid(self.structure[site].frac_coords)

    def get_voxel_coords_from_neigh_CrystalNN(self, neigh) -> NDArray[int]:
        """
        Gets the voxel grid index from a neighbor atom object from CrystalNN or
        VoronoiNN

        Parameters
        ----------
        neigh :
            A neighbor type object from pymatgen.

        Returns
        -------
        NDArray[int]
            A voxel grid index as an array.

        """
        grid_size = self.shape
        frac = neigh["site"].frac_coords
        voxel_coords = [a * b for a, b in zip(grid_size, frac)]
        # voxel positions range from 0 up to (but not including) grid_size
        return np.array(voxel_coords)

    def get_voxel_coords_from_neigh(self, neigh: dict) -> NDArray[int]:
        """
        Gets the voxel grid index from a neighbor atom object from the pymatgen
        structure.get_neighbors method.

        Parameters
        ----------
        neigh : dict
            A neighbor dictionary from pymatgen's structure.get_neighbors
            method.

        Returns
        -------
        NDArray[int]
            A voxel grid index as an array.

        """

        grid_size = self.shape
        frac_coords = neigh.frac_coords
        voxel_coords = [a * b for a, b in zip(grid_size, frac_coords)]
        # voxel positions range from 0 up to (but not including) grid_size
        return np.array(voxel_coords)

    def cart_to_frac(self, cart_coords: NDArray | list) -> NDArray[float]:
        """
        Takes in a cartesian coordinate and returns the fractional coordinates.

        Parameters
        ----------
        cart_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[float]
            Fractional coordinates as an Nx3 Array.

        """
        inverse_matrix = np.linalg.inv(self.matrix)

        return cart_coords @ inverse_matrix

    def cart_to_grid(self, cart_coords: NDArray | list) -> NDArray[int]:
        """
        Takes in a cartesian coordinate and returns the voxel coordinates.

        Parameters
        ----------
        cart_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[int]
            Voxel coordinates as an Nx3 Array.

        """
        frac_coords = self.cart_to_frac(cart_coords)
        voxel_coords = self.frac_to_grid(frac_coords)
        return voxel_coords

    def frac_to_cart(self, frac_coords: NDArray) -> NDArray[float]:
        """
        Takes in a fractional coordinate and returns the cartesian coordinates.

        Parameters
        ----------
        frac_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[float]
            Cartesian coordinates as an Nx3 Array.

        """

        return frac_coords @ self.matrix

    def grid_to_frac(self, vox_coords: NDArray) -> NDArray[float]:
        """
        Takes in voxel coordinates and returns the fractional coordinates.

        Parameters
        ----------
        vox_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[float]
            Fractional coordinates as an Nx3 Array.

        """

        return vox_coords / self.shape

    def frac_to_grid(self, frac_coords: NDArray) -> NDArray[int]:
        """
        Takes in fractional coordinates and returns the voxel coordinates.

        Parameters
        ----------
        frac_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[int]
            Voxel coordinates as an Nx3 Array.

        """
        return frac_coords * self.shape

    def grid_to_cart(self, vox_coords: NDArray) -> NDArray[float]:
        """
        Takes in voxel coordinates and returns the cartesian coordinates.

        Parameters
        ----------
        vox_coords : NDArray | list
            An Nx3 Array or 1D array of length 3.

        Returns
        -------
        NDArray[float]
            Cartesian coordinates as an Nx3 Array.

        """
        frac_coords = self.grid_to_frac(vox_coords)
        return self.frac_to_cart(frac_coords)

    ###########################################################################
    # Functions for loading from files or strings
    ###########################################################################
    # @staticmethod
    # def _guess_file_format(
    #     filename: str,
    #     data: NDArray[np.float64],
    # ):
    #     # guess from filename
    #     data_type = None
    #     if "elf" in filename.lower():
    #         data_type = DataType.elf
    #     elif any(i in filename.lower() for i in ["chg", "charge"]):
    #         data_type = DataType.charge
    #     if data_type is not None:
    #         logging.info(f"Data type set as {data_type.value} from file name")
    #     return data_type

    @classmethod
    def from_vasp(
        cls,
        grid_file: str | Path,
        data_type: str | DataType = None,
        total_only: bool = True,
        **kwargs,
    ) -> Self:
        """
        Create a grid instance using a CHGCAR or ELFCAR file.

        Parameters
        ----------
        grid_file : str | Path
            The file the instance should be made from. Should be a VASP
            CHGCAR or ELFCAR type file.
        data_type: str | DataType
            The type of data loaded from the file, either charge or elf. If
            None, the type will be guessed from the data range.
            Defaults to None.
        total_only: bool
            If true, only the first set of data in the file will be read. This
            increases speed and reduces memory usage for methods that do not
            use the spin data.
            Defaults to True.

        Returns
        -------
        Self
            Grid from the specified file.

        """
        logging.info(f"Loading {grid_file}")
        t0 = time.time()
        # get structure and data from file
        grid_file = Path(grid_file)
        structure, data, data_aug = read_vasp(grid_file, total_only=total_only)
        t1 = time.time()
        logging.info(f"Time: {round(t1-t0,2)}")
        return cls(
            structure=structure,
            data=data,
            data_aug=data_aug,
            data_type=data_type,
            source_format=Format.vasp,
            **kwargs,
        )

    @classmethod
    def from_cube(
        cls,
        grid_file: str | Path,
        data_type: str | DataType = None,
        **kwargs,
    ) -> Self:
        """
        Create a grid instance using a gaussian cube file.

        Parameters
        ----------
        grid_file : str | Path
            The file the instance should be made from. Should be a gaussian
            cube file.
        data_type: str | DataType
            The type of data loaded from the file, either charge or elf. If
            None, the type will be guessed from the data range.
            Defaults to None.

        Returns
        -------
        Self
            Grid from the specified file.

        """
        logging.info(f"Loading {grid_file}")
        t0 = time.time()
        # make sure path is a Path object
        grid_file = Path(grid_file)
        structure, data, ion_charges, origin = read_cube(grid_file)
        # TODO: Also save the ion charges/origin for writing later
        t1 = time.time()
        logging.info(f"Time: {round(t1-t0,2)}")
        return cls(
            structure=structure,
            data=data,
            data_type=data_type,
            source_format=Format.cube,
            **kwargs,
        )

    @classmethod
    def from_vasp_pymatgen(
        cls,
        grid_file: str | Path,
        data_type: str | DataType = None,
        **kwargs,
    ) -> Self:
        """
        Create a grid instance using a CHGCAR or ELFCAR file. Uses pymatgen's
        parse_file method which is often surprisingly slow.

        Parameters
        ----------
        grid_file : str | Path
            The file the instance should be made from. Should be a VASP
            CHGCAR or ELFCAR type file.
        data_type: str | DataType
            The type of data loaded from the file, either charge or elf. If
            None, the type will be guessed from the data range.
            Defaults to None.

        Returns
        -------
        Self
            Grid from the specified file.

        """
        logging.info(f"Loading {grid_file}")
        t0 = time.time()
        # make sure path is a Path object
        grid_file = Path(grid_file)
        # Create string to add structure to.
        poscar, data, data_aug = cls.parse_file(grid_file)
        t1 = time.time()
        logging.info(f"Time: {round(t1-t0,2)}")
        return cls(
            structure=poscar.structure,
            data=data,
            data_aug=data_aug,
            source_format=Format.vasp,
            data_type=data_type,
            **kwargs,
        )

    @classmethod
    def from_hdf5(
        cls,
        grid_file: str | Path,
        data_type: str | DataType = None,
        **kwargs,
    ) -> Self:
        """
        Create a grid instance using an hdf5 file.

        Parameters
        ----------
        grid_file : str | Path
            The file the instance should be made from. Should be a binary hdf5
            file.
        data_type: str | DataType
            The type of data loaded from the file, either charge or elf. If
            None, the type will be guessed from the data range.
            Defaults to None.

        Returns
        -------
        Self
            Grid from the specified file.

        """
        try:
            import h5py
        except ImportError:
            raise ImportError(
                """
                The `h5py` package is required to read/write to the hdf5 format.
                Please install with `conda install h5py` or `pip install h5py`.
                """
            )

        logging.info(f"Loading {grid_file}")
        t0 = time.time()
        # make sure path is a Path object
        grid_file = Path(grid_file)
        # load the file
        pymatgen_grid = super().from_hdf5(filename=grid_file)
        t1 = time.time()
        logging.info(f"Time: {round(t1-t0,2)}")
        return cls(
            structure=pymatgen_grid.structure,
            data=pymatgen_grid.data,
            data_aug=pymatgen_grid.data_aug,
            source_format=Format.hdf5,
            data_type=data_type,
            **kwargs,
        )

    @classmethod
    def from_dynamic(
        cls,
        grid_file: str | Path,
        format: str | Format = None,
        **kwargs,
    ) -> Self:
        """
        Create a grid instance using a VASP or .cube file. If no format is provided
        the format is guessed from the name of the file.

        Parameters
        ----------
        grid_file : str | Path
            The file the instance should be made from.
        format : Format, optional
            The format of the provided file. If None, a guess will be made based
            on the name of the file. Setting this is identical to calling the
            from methods for the corresponding file type. The default is None.

        Returns
        -------
        Self
            Grid from the specified file.

        """
        grid_file = Path(grid_file)
        if format is None:
            # guess format from file
            format = detect_format(grid_file)

        # make sure format is an available option
        assert (
            format in Format
        ), f"Provided format '{format}' is not supported. Options are: {[i.value for i in Format]}"

        # get the reading method corresponding to this output format
        method_name = format.reader

        # load from file
        return getattr(cls, method_name)(grid_file, **kwargs)

    def write_vasp(
        self,
        filename: Path | str,
        vasp4_compatible: bool = False,
    ):
        """
        Writes the Grid to a VASP-like file at the provided path.

        Parameters
        ----------
        filename : Path | str
            The name of the file to write to.

        Returns
        -------
        None.

        """
        filename = Path(filename)
        logging.info(f"Writing {filename.name}")
        write_vasp_file(filename=filename, grid=self, vasp4_compatible=vasp4_compatible)

    def write_cube(
        self,
        filename: Path | str,
        **kwargs,
    ):
        """
        Writes the Grid to a Gaussian cube-like file at the provided path.

        Parameters
        ----------
        filename : Path | str
            The name of the file to write to.

        Returns
        -------
        None.

        """
        filename = Path(filename)
        logging.info(f"Writing {filename.name}")
        write_cube_file(
            filename=filename,
            grid=self,
            **kwargs,
        )

    def to_hdf5(
        self,
        filename: Path | str,
        **kwargs,
    ):
        try:
            import h5py
        except ImportError:
            raise ImportError(
                """
                The `h5py` package is required to read/write to the hdf5 format.
                Please install with `conda install h5py` or `pip install h5py`.
                """
            )
        filename = Path(filename)
        logging.info(f"Writing {filename.name}")
        super().to_hdf5(filename)

    def write(
        self,
        filename: Path | str,
        output_format: Format | str = None,
        **kwargs,
    ):
        """
        Writes the Grid to a file of the requested format at the provided path. If no
        format is provided, uses this Grid object's stored source format.

        Parameters
        ----------
        filename : Path | str
            The name of the file to write to.
        output_format : Format | str
            The format to write with. If None, writes using the source format
            stored in this Grid object's metadata.
            Defaults to None.

        Returns
        -------
        None.

        """
        # If no provided format, get from metadata
        if output_format is None:
            output_format = self.source_format
        # Make sure format is a Format object not a string
        output_format = Format(output_format)
        # get the writing method corresponding to this output format
        method_name = output_format.writer
        # write the grid
        getattr(self, method_name)(filename, **kwargs)

diff property writable

Returns:

Type Description
NDArray[float]

For charge densities, returns the magnetized charge (spin-up - spin-down). For ELF returns the spin-down ELF. If the file was not from a spin polarized calculation, this will be None.
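
For spin-polarized charge densities, the individual spin channels can be recovered from total and diff. A minimal sketch, assuming grid is a spin-polarized charge-density Grid (so grid.diff is not None):

# total = spin-up + spin-down, diff = spin-up - spin-down
spin_up = (grid.total + grid.diff) / 2
spin_down = (grid.total - grid.diff) / 2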

equivalent_atoms property

Returns:

Type Description
NDArray[int]

The equivalent atoms in the Structure.

flat_grid_indices property

Returns:

Type Description
NDArray[int]

An array of the same shape as the grid where each entry is the index of that voxel if you were to flatten/ravel the grid. Uses 'C' ordering.

grid_indices property

Returns:

Type Description
NDArray[int]

The indices for all points on the grid. Uses 'C' ordering.

grid_neighbor_transforms property

The transforms for translating a grid index to neighboring unit cells. This is necessary for the many voxels that will not be directly within an atom's partitioning.

Returns:

Type Description
list

A list of voxel grid_neighbor_transforms unique to the grid dimensions.

grid_resolution property

Returns:

Type Description
float

The number of voxels per unit volume.

matrix property

Returns:

Type Description
NDArray[float]

A 3x3 matrix defining the a, b, and c sides of the unit cell. Each row is the corresponding lattice vector in cartesian space.

max_point_dist property

Returns:

Type Description
float

The maximum distance from the center of a point to one of its corners. This assumes the voxel is the same shape as the lattice.

maxima_mask property

Returns:

Type Description
NDArray[bool]

A mask with the same dimensions as the data that is True at local maxima. Adjacent points with the same value will both be labeled as True.

minima_mask property

Returns:

Type Description
NDArray[bool]

A mask with the same dimensions as the data that is True at local minima. Adjacent points with the same value will both be labeled as True.

point_dists property

Returns:

Type Description
NDArray[float]

The distance from each point to the origin in cartesian coordinates.

point_neighbor_face_tranforms cached property

Returns:

Type Description
(NDArray[int], NDArray[float])

A tuple where the first entry is a 6x3 array of transformations in voxel space from any voxel to its face-sharing neighbors and the second is the distance to each of these neighbors in cartesian space.

point_neighbor_transforms cached property

Returns:

Type Description
(NDArray[int], NDArray[float])

A tuple where the first entry is a 26x3 array of transformations from any point to its neighbors and the second is the distance to each of these neighbors in cartesian space.

point_neighbor_voronoi_transforms cached property

Returns:

Type Description
tuple[NDArray, NDArray, NDArray, NDArray]

The transformations, neighbor distances, areas, and vertices of the voronoi surface between any point and its neighbors in the grid. This is used in the 'weight' method for Bader analysis.

point_volume property

Returns:

Type Description
float

The volume of a single point in the grid.

shape property

Returns:

Type Description
NDArray[int]

The number of points along each axis of the grid.

symmetry_data property

Returns:

Type Description
TYPE

The pymatgen symmetry dataset for the Grid's Structure object

total property writable

Returns:

Type Description
NDArray[float]

For charge densities, returns the total charge (spin-up + spin-down). For ELF returns the spin-up or single spin ELF.

cart_to_frac(cart_coords)

Takes in a cartesian coordinate and returns the fractional coordinates.

Parameters:

Name Type Description Default
cart_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[float]

Fractional coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1550-1567
def cart_to_frac(self, cart_coords: NDArray | list) -> NDArray[float]:
    """
    Takes in a cartesian coordinate and returns the fractional coordinates.

    Parameters
    ----------
    cart_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[float]
        Fractional coordinates as an Nx3 Array.

    """
    inverse_matrix = np.linalg.inv(self.matrix)

    return cart_coords @ inverse_matrix
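
A minimal usage sketch, assuming an existing Grid instance named grid: cart_to_frac and frac_to_cart are inverses through the lattice matrix.

import numpy as np

cart = np.array([1.5, 0.0, 2.25])        # hypothetical cartesian position
frac = grid.cart_to_frac(cart)           # cartesian -> fractional
assert np.allclose(grid.frac_to_cart(frac), cart)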

cart_to_grid(cart_coords)

Takes in a cartesian coordinate and returns the voxel coordinates.

Parameters:

Name Type Description Default
cart_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[int]

Voxel coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1569-1586
def cart_to_grid(self, cart_coords: NDArray | list) -> NDArray[int]:
    """
    Takes in a cartesian coordinate and returns the voxel coordinates.

    Parameters
    ----------
    cart_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[int]
        Voxel coordinates as an Nx3 Array.

    """
    frac_coords = self.cart_to_frac(cart_coords)
    voxel_coords = self.frac_to_grid(frac_coords)
    return voxel_coords

check_if_infinite_feature(volume_mask)

Checks if a mask extends infinitely in at least one direction. This method uses scipy's ndimage package to label features in the mask combined with a supercell to check if the label matches between unit cells.

Parameters:

Name Type Description Default
volume_mask NDArray[bool]

A mask of the same shape as the current grid.

required

Returns:

Type Description
bool

Whether or not this is an infinite feature.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1018-1068
def check_if_infinite_feature(self, volume_mask: NDArray[bool]) -> bool:
    """
    Checks if a mask extends infinitely in at least one direction.
    This method uses scipy's ndimage package to label features in the mask
    combined with a supercell to check if the label matches between unit cells.

    Parameters
    ----------
    volume_mask : NDArray[bool]
        A mask of the same shape as the current grid.

    Returns
    -------
    bool
        Whether or not this is an infinite feature.

    """
    # First we check that there is at least one feature in the mask. If not
    # we return False as there is no feature.
    if (~volume_mask).all():
        return False

    structure = np.ones([3, 3, 3])
    # Now we create a supercell of the mask so we can check connections to
    # neighboring cells. This will be used to check if the feature connects
    # to itself in each direction
    supercell_mask = self.get_2x_supercell(volume_mask)
    # Now we use scipy to label unique features in our masks
    feature_supercell = self.label(supercell_mask, structure)
    # Now we check if we have the same label in any of the adjacent unit
    # cells. If yes we have an infinite feature.
    transformations = np.array(list(itertools.product([0, 1], repeat=3)))
    transformations = self.frac_to_grid(transformations)
    initial_coord = np.argwhere(volume_mask)[0]
    transformed_coords = (transformations + initial_coord).astype(int)

    # Get the feature label at each transformation. If the feature does not
    # wrap onto itself across any cell boundary, all eight labels will differ
    features = feature_supercell[
        transformed_coords[:, 0], transformed_coords[:, 1], transformed_coords[:, 2]
    ]

    inf_feature = False
    # If any of the transformed coords have the same feature value, this
    # feature extends between unit cells in at least 1 direction and is
    # infinite. This corresponds to the list of unique features being below
    # 8
    if len(np.unique(features)) < 8:
        inf_feature = True

    return inf_feature
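
A minimal usage sketch, assuming an existing Grid instance named grid and a purely illustrative threshold:

# any boolean mask with the same shape as the grid works
mask = grid.total > 0.5 * grid.total.max()
if grid.check_if_infinite_feature(mask):
    print("feature wraps around the periodic boundary in at least one direction")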

climb_to_max(frac_coords)

Hill climbs to a maximum from the provided fractional coordinate.

Parameters:

Name Type Description Default
frac_coords NDArray

The starting coordinate for hill climbing.

required

Returns:

Type Description
NDArray[float]

The final fractional coordinates after hill climbing.

float

The data value at the found maximum

Source code in src/baderkit/core/toolkit/grid.py
Lines 699-744
def climb_to_max(self, frac_coords: NDArray) -> NDArray[float]:
    """
    Hill climbs to a maximum from the provided fractional coordinate.

    Parameters
    ----------
    frac_coords : NDArray
        The starting coordinate for hill climbing.

    Returns
    -------
    NDArray[float]
        The final fractional coordinates after hill climbing.
    float
        The data value at the found maximum

    """
    # Convert to voxel coords and round
    coords = np.round(self.frac_to_grid(frac_coords)).astype(int)
    # wrap around edges of cell
    coords %= self.shape
    i, j, k = coords

    # import numba function to avoid circular import
    from baderkit.core.toolkit.grid_numba import climb_to_max

    # get neighbors and dists
    neighbor_transforms, neighbor_dists = self.point_neighbor_transforms
    # get max
    mi, mj, mk = climb_to_max(
        self.total, i, j, k, neighbor_transforms, neighbor_dists
    )
    # get value at max
    max_val = self.total[mi, mj, mk]
    # Now we check if this point borders other points with the same value
    box = self.get_box_around_point((mi, mj, mk))
    all_max = np.argwhere(box == max_val)
    avg_pos = all_max.mean(axis=0)  # average grid index of the tied maxima
    local_offset = avg_pos - 1  # shift from subset center
    current_coords = np.array((mi, mj, mk)) + local_offset
    current_coords %= self.shape

    new_frac_coords = self.grid_to_frac(current_coords)
    x, y, z = new_frac_coords

    return new_frac_coords, self.value_at(x, y, z)
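
A minimal usage sketch, assuming an existing Grid instance named grid: hill-climb from the first site's position to the nearest maximum.

start_frac = grid.structure[0].frac_coords      # illustrative starting point
max_frac, max_value = grid.climb_to_max(start_frac)
print(max_frac, max_value)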

copy()

Convenience method to get a copy of the current Grid.

Returns:

Type Description
Self

A copy of the Grid.

Source code in src/baderkit/core/toolkit/grid.py
Lines 879-896
def copy(self) -> Self:
    """
    Convenience method to get a copy of the current Grid.

    Returns
    -------
    Self
        A copy of the Grid.

    """
    return Grid(
        structure=self.structure.copy(),
        data=self.data.copy(),
        data_aug=self.data_aug.copy(),
        source_format=self.source_format,
        data_type=self.data_type,
        distance_matrix=self._distance_matrix.copy(),
    )

frac_to_cart(frac_coords)

Takes in a fractional coordinate and returns the cartesian coordinates.

Parameters:

Name Type Description Default
frac_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[float]

Cartesian coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1588-1604
def frac_to_cart(self, frac_coords: NDArray) -> NDArray[float]:
    """
    Takes in a fractional coordinate and returns the cartesian coordinates.

    Parameters
    ----------
    frac_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[float]
        Cartesian coordinates as an Nx3 Array.

    """

    return frac_coords @ self.matrix

frac_to_grid(frac_coords)

Takes in fractional coordinates and returns the voxel coordinates.

Parameters:

Name Type Description Default
frac_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[int]

Voxel coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1624-1639
def frac_to_grid(self, frac_coords: NDArray) -> NDArray[int]:
    """
    Takes in fractional coordinates and returns the voxel coordinates.

    Parameters
    ----------
    frac_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[int]
        Voxel coordinates as an Nx3 Array.

    """
    return frac_coords * self.shape
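
Note that frac_to_grid returns floating-point voxel coordinates. A minimal sketch, assuming an existing Grid instance named grid, of rounding and wrapping them before indexing the data:

import numpy as np

vox = grid.frac_to_grid([0.25, 0.5, 0.999])     # float voxel coordinates
idx = np.round(vox).astype(int) % grid.shape    # wrap to valid indices
value = grid.total[tuple(idx)]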

from_cube(grid_file, data_type=None, **kwargs) classmethod

Create a grid instance using a gaussian cube file.

Parameters:

Name Type Description Default
grid_file str | Path

The file the instance should be made from. Should be a gaussian cube file.

required
data_type str | DataType

The type of data loaded from the file, either charge or elf. If None, the type will be guessed from the data range. Defaults to None.

None

Returns:

Type Description
Self

Grid from the specified file.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1725-1765
@classmethod
def from_cube(
    cls,
    grid_file: str | Path,
    data_type: str | DataType = None,
    **kwargs,
) -> Self:
    """
    Create a grid instance using a gaussian cube file.

    Parameters
    ----------
    grid_file : str | Path
        The file the instance should be made from. Should be a gaussian
        cube file.
    data_type: str | DataType
        The type of data loaded from the file, either charge or elf. If
        None, the type will be guessed from the data range.
        Defaults to None.

    Returns
    -------
    Self
        Grid from the specified file.

    """
    logging.info(f"Loading {grid_file}")
    t0 = time.time()
    # make sure path is a Path object
    grid_file = Path(grid_file)
    structure, data, ion_charges, origin = read_cube(grid_file)
    # TODO: Also save the ion charges/origin for writing later
    t1 = time.time()
    logging.info(f"Time: {round(t1-t0,2)}")
    return cls(
        structure=structure,
        data=data,
        data_type=data_type,
        source_format=Format.cube,
        **kwargs,
    )

from_dynamic(grid_file, format=None, **kwargs) classmethod

Create a grid instance using a VASP or .cube file. If no format is provided, the format is guessed from the name of the file.

Parameters:

Name Type Description Default
grid_file str | Path

The file the instance should be made from.

required
format Format

The format of the provided file. If None, a guess will be made based on the name of the file. Setting this is identical to calling the from methods for the corresponding file type. The default is None.

None

Returns:

Type Description
Self

Grid from the specified file.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1864-1904
@classmethod
def from_dynamic(
    cls,
    grid_file: str | Path,
    format: str | Format = None,
    **kwargs,
) -> Self:
    """
    Create a grid instance using a VASP or .cube file. If no format is provided
    the format is guessed from the name of the file.

    Parameters
    ----------
    grid_file : str | Path
        The file the instance should be made from.
    format : Format, optional
        The format of the provided file. If None, a guess will be made based
        on the name of the file. Setting this is identical to calling the
        from methods for the corresponding file type. The default is None.

    Returns
    -------
    Self
        Grid from the specified file.

    """
    grid_file = Path(grid_file)
    if format is None:
        # guess format from file
        format = detect_format(grid_file)

    # make sure format is an available option
    assert (
        format in Format
    ), f"Provided format '{format}' is not supported. Options are: {[i.value for i in Format]}"

    # get the reading method corresponding to this output format
    method_name = format.reader

    # load from file
    return getattr(cls, method_name)(grid_file, **kwargs)
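
A minimal loading sketch; the path is a placeholder and the import location mirrors the source file shown above:

from baderkit.core.toolkit.grid import Grid

# the reader is chosen automatically from the file name
grid = Grid.from_dynamic("CHGCAR")
print(grid.source_format)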

from_hdf5(grid_file, data_type=None, **kwargs) classmethod

Create a grid instance using an hdf5 file.

Parameters:

Name Type Description Default
grid_file str | Path

The file the instance should be made from. Should be a binary hdf5 file.

required
data_type str | DataType

The type of data loaded from the file, either charge or elf. If None, the type will be guessed from the data range. Defaults to None.

None

Returns:

Type Description
Self

Grid from the specified file.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1811-1862
@classmethod
def from_hdf5(
    cls,
    grid_file: str | Path,
    data_type: str | DataType = None,
    **kwargs,
) -> Self:
    """
    Create a grid instance using an hdf5 file.

    Parameters
    ----------
    grid_file : str | Path
        The file the instance should be made from. Should be a binary hdf5
        file.
    data_type: str | DataType
        The type of data loaded from the file, either charge or elf. If
        None, the type will be guessed from the data range.
        Defaults to None.

    Returns
    -------
    Self
        Grid from the specified file.

    """
    try:
        import h5py
    except ImportError:
        raise ImportError(
            """
            The `h5py` package is required to read/write to the hdf5 format.
            Please install with `conda install h5py` or `pip install h5py`.
            """
        )

    logging.info(f"Loading {grid_file}")
    t0 = time.time()
    # make sure path is a Path object
    grid_file = Path(grid_file)
    # load the file
    pymatgen_grid = super().from_hdf5(filename=grid_file)
    t1 = time.time()
    logging.info(f"Time: {round(t1-t0,2)}")
    return cls(
        structure=pymatgen_grid.structure,
        data=pymatgen_grid.data,
        data_aug=pymatgen_grid.data_aug,
        source_format=Format.hdf5,
        data_type=data_type,
        **kwargs,
    )

from_vasp(grid_file, data_type=None, total_only=True, **kwargs) classmethod

Create a grid instance using a CHGCAR or ELFCAR file.

Parameters:

Name Type Description Default
grid_file str | Path

The file the instance should be made from. Should be a VASP CHGCAR or ELFCAR type file.

required
data_type str | DataType

The type of data loaded from the file, either charge or elf. If None, the type will be guessed from the data range. Defaults to None.

None
total_only bool

If true, only the first set of data in the file will be read. This increases speed and reduces memory usage for methods that do not use the spin data. Defaults to True.

True

Returns:

Type Description
Self

Grid from the specified file.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1677-1723
@classmethod
def from_vasp(
    cls,
    grid_file: str | Path,
    data_type: str | DataType = None,
    total_only: bool = True,
    **kwargs,
) -> Self:
    """
    Create a grid instance using a CHGCAR or ELFCAR file.

    Parameters
    ----------
    grid_file : str | Path
        The file the instance should be made from. Should be a VASP
        CHGCAR or ELFCAR type file.
    data_type: str | DataType
        The type of data loaded from the file, either charge or elf. If
        None, the type will be guessed from the data range.
        Defaults to None.
    total_only: bool
        If true, only the first set of data in the file will be read. This
        increases speed and reduces memory usage for methods that do not
        use the spin data.
        Defaults to True.

    Returns
    -------
    Self
        Grid from the specified file.

    """
    logging.info(f"Loading {grid_file}")
    t0 = time.time()
    # get structure and data from file
    grid_file = Path(grid_file)
    structure, data, data_aug = read_vasp(grid_file, total_only=total_only)
    t1 = time.time()
    logging.info(f"Time: {round(t1-t0,2)}")
    return cls(
        structure=structure,
        data=data,
        data_aug=data_aug,
        data_type=data_type,
        source_format=Format.vasp,
        **kwargs,
    )
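
A minimal sketch of reading a CHGCAR (placeholder path), assuming the module path shown above:

from baderkit.core.toolkit.grid import Grid

# total_only skips the spin-difference block for faster, lighter reads
grid = Grid.from_vasp("CHGCAR", total_only=True)
charge = grid.total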

from_vasp_pymatgen(grid_file, data_type=None, **kwargs) classmethod

Create a grid instance using a CHGCAR or ELFCAR file. Uses pymatgen's parse_file method which is often surprisingly slow.

Parameters:

Name Type Description Default
grid_file str | Path

The file the instance should be made from. Should be a VASP CHGCAR or ELFCAR type file.

required
data_type str | DataType

The type of data loaded from the file, either charge or elf. If None, the type will be guessed from the data range. Defaults to None.

None

Returns:

Type Description
Self

Grid from the specified file.

Source code in src/baderkit/core/toolkit/grid.py
Lines 1767-1809
@classmethod
def from_vasp_pymatgen(
    cls,
    grid_file: str | Path,
    data_type: str | DataType = None,
    **kwargs,
) -> Self:
    """
    Create a grid instance using a CHGCAR or ELFCAR file. Uses pymatgen's
    parse_file method which is often surprisingly slow.

    Parameters
    ----------
    grid_file : str | Path
        The file the instance should be made from. Should be a VASP
        CHGCAR or ELFCAR type file.
    data_type: str | DataType
        The type of data loaded from the file, either charge or elf. If
        None, the type will be guessed from the data range.
        Defaults to None.

    Returns
    -------
    Self
        Grid from the specified file.

    """
    logging.info(f"Loading {grid_file}")
    t0 = time.time()
    # make sure path is a Path object
    grid_file = Path(grid_file)
    # Create string to add structure to.
    poscar, data, data_aug = cls.parse_file(grid_file)
    t1 = time.time()
    logging.info(f"Time: {round(t1-t0,2)}")
    return cls(
        structure=poscar.structure,
        data=data,
        data_aug=data_aug,
        source_format=Format.vasp,
        data_type=data_type,
        **kwargs,
    )

get_2x_supercell(data=None) staticmethod

Duplicates data to make a 2x2x2 supercell

Parameters:

Name Type Description Default
data NDArray | None

The data to duplicate. The default is None.

None

Returns:

Type Description
NDArray

A new array with the data doubled in each direction

Source code in src/baderkit/core/toolkit/grid.py
Lines 746-762
@staticmethod
def get_2x_supercell(data: NDArray | None = None) -> NDArray:
    """
    Duplicates data to make a 2x2x2 supercell

    Parameters
    ----------
    data : NDArray | None, optional
        The data to duplicate. The default is None.

    Returns
    -------
    NDArray
        A new array with the data doubled in each direction
    """
    new_data = np.tile(data, (2, 2, 2))
    return new_data
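
A minimal sketch of the tiling behavior, assuming Grid has been imported as in the loading sketches above:

import numpy as np

data = np.arange(8).reshape(2, 2, 2)
supercell = Grid.get_2x_supercell(data)
print(supercell.shape)   # (4, 4, 4)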

get_atoms_in_volume(volume_mask)

Checks if an atom is within the provided volume. This only checks the point right where the atom is located, so a shell around the atom will not be caught.

Parameters:

Name Type Description Default
volume_mask NDArray[bool]

A mask of the same shape as the current grid.

required

Returns:

Type Description
NDArray[int]

A list of atoms in the provided mask.

Source code in src/baderkit/core/toolkit/grid.py
Lines 898-925
def get_atoms_in_volume(self, volume_mask: NDArray[bool]) -> NDArray[int]:
    """
    Checks if an atom is within the provided volume. This only checks the
    point right where the atom is located, so a shell around the atom will
    not be caught.

    Parameters
    ----------
    volume_mask : NDArray[bool]
        A mask of the same shape as the current grid.

    Returns
    -------
    NDArray[int]
        A list of atoms in the provided mask.

    """
    # Make sure the shape of the mask is the same as the grid
    assert np.all(
        np.equal(self.shape, volume_mask.shape)
    ), "Mask and Grid must be the same shape"
    # Get the voxel coordinates for each atom
    site_voxel_coords = self.frac_to_grid(self.structure.frac_coords).astype(int)
    # Return the indices of the atoms that are in the mask
    atoms_in_volume = volume_mask[
        site_voxel_coords[:, 0], site_voxel_coords[:, 1], site_voxel_coords[:, 2]
    ]
    return np.argwhere(atoms_in_volume)
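
A minimal usage sketch, assuming an existing Grid instance named grid and an illustrative threshold mask:

mask = grid.total > 0.5 * grid.total.max()
atom_indices = grid.get_atoms_in_volume(mask)   # indices into grid.structure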

get_atoms_surrounded_by_volume(volume_mask, return_type=False)

Checks if a mask completely surrounds any of the atoms in the structure. This method uses scipy's ndimage package to label features in the grid combined with a supercell to check if atoms identical through translation are connected.

Parameters:

Name Type Description Default
volume_mask NDArray[bool]

A mask of the same shape as the current grid.

required
return_type bool

Whether or not to return the type of surrounding. 0 indicates that the atom sits exactly in the volume. 1 indicates that it is surrounded but not directly in it. The default is False.

False

Returns:

Type Description
NDArray[int]

The atoms that are surrounded by this mask.

Source code in src/baderkit/core/toolkit/grid.py
Lines 927-1016
def get_atoms_surrounded_by_volume(
    self, volume_mask: NDArray[bool], return_type: bool = False
) -> NDArray[int]:
    """
    Checks if a mask completely surrounds any of the atoms
    in the structure. This method uses scipy's ndimage package to
    label features in the grid combined with a supercell to check
    if atoms identical through translation are connected.

    Parameters
    ----------
    volume_mask : NDArray[bool]
        A mask of the same shape as the current grid.
    return_type : bool, optional
        Whether or not to return the type of surrounding. 0 indicates that
        the atom sits exactly in the volume. 1 indicates that it is surrounded
        but not directly in it. The default is False.

    Returns
    -------
    NDArray[int]
        The atoms that are surrounded by this mask.

    """
    # Make sure the shape of the mask is the same as the grid
    assert np.all(
        np.equal(self.shape, volume_mask.shape)
    ), "Mask and Grid must be the same shape"
    # first we get any atoms that are within the mask itself. These won't be
    # found otherwise because they will always sit in unlabeled regions.
    structure = np.ones([3, 3, 3])
    dilated_mask = binary_dilation(volume_mask, structure)
    init_atoms = self.get_atoms_in_volume(dilated_mask)
    # check if we've surrounded all of our atoms. If so, we can return and
    # skip the rest
    if len(init_atoms) == len(self.structure):
        return init_atoms, np.zeros(len(init_atoms))
    # Now we create a supercell of the mask so we can check connections to
    # neighboring cells. This will be used to check if the feature connects
    # to itself in each direction
    dilated_supercell_mask = self.get_2x_supercell(dilated_mask)
    # We also get an inversion of this mask. This will be used to check if
    # the mask surrounds each atom. To do this, we use the dilated supercell
    # We do this to avoid thin walls being considered connections
    # in the inverted mask
    inverted_mask = dilated_supercell_mask == False
    # Now we use scipy to label unique features in our masks

    inverted_feature_supercell = self.label(inverted_mask, structure)

    # if an atom was fully surrounded, it should sit inside one of our labels.
    # The same atom in an adjacent unit cell should have a different label.
    # To check this, we need to look at the atom in each section of the supercell
    # and see if it has a different label in each.
    # Similarly, if the feature is disconnected from itself in each unit cell
    # any voxel in the feature should have different labels in each section.
    # If not, the feature is connected to itself in multiple directions and
    # must surround many atoms.
    transformations = np.array(list(itertools.product([0, 1], repeat=3)))
    transformations = self.frac_to_grid(transformations)
    # Check each atom to determine how many atoms it surrounds
    surrounded_sites = []
    for i, site in enumerate(self.structure):
        # Get the voxel coords of each atom in their equivalent spots in each
        # quadrant of the supercell
        frac_coords = site.frac_coords
        voxel_coords = self.frac_to_grid(frac_coords)
        transformed_coords = (transformations + voxel_coords).astype(int)
        # Get the feature label at each transformation. If the atom is not surrounded
        # by this basin, at least some of these feature labels will be the same
        features = inverted_feature_supercell[
            transformed_coords[:, 0],
            transformed_coords[:, 1],
            transformed_coords[:, 2],
        ]
        if len(np.unique(features)) == 8:
            # The atom is completely surrounded by this basin and the basin belongs
            # to this atom
            surrounded_sites.append(i)
    surrounded_sites.extend(init_atoms)
    surrounded_sites = np.unique(surrounded_sites)
    types = []
    for site in surrounded_sites:
        if site in init_atoms:
            types.append(0)
        else:
            types.append(1)
    if return_type:
        return surrounded_sites, types
    return surrounded_sites

get_box_around_point(point, neighbor_size=1)

Gets a box around a given point taking into account wrapping at cell boundaries.

Parameters:

Name Type Description Default
point NDArray

The indices of the point to get a box around.

required
neighbor_size int

The size of the box on either side of the point. The default is 1.

1

Returns:

Type Description
NDArray

A slice of the grid taken around the provided point.

Source code in src/baderkit/core/toolkit/grid.py
Lines 673-697
def get_box_around_point(self, point: NDArray, neighbor_size: int = 1) -> NDArray:
    """
    Gets a box around a given point taking into account wrapping at cell
    boundaries.

    Parameters
    ----------
    point : NDArray
        The indices of the point to get a box around.
    neighbor_size : int, optional
        The size of the box on either side of the point. The default is 1.

    Returns
    -------
    NDArray
        A slice of the grid taken around the provided point.

    """

    slices = []
    for dim, c in zip(self.shape, point):
        idx = np.arange(c - neighbor_size, c + 2) % dim
        idx = idx.astype(int)
        slices.append(idx)
    return self.total[np.ix_(slices[0], slices[1], slices[2])]
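
A minimal usage sketch, assuming an existing Grid instance named grid: the box wraps across cell boundaries, so a corner voxel still returns a full 3x3x3 block.

box = grid.get_box_around_point((0, 0, 0), neighbor_size=1)
print(box.shape)   # (3, 3, 3)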

get_points_in_radius(point, radius)

Gets the indices of the points in a radius around a point

Parameters:

Name Type Description Default
radius float

The radius in cartesian distance units to find indices around the point.

required
point NDArray

The indices of the point to perform the operation on.

required

Returns:

Type Description
NDArray[int]

The point indices in the sphere around the provided point.

Source code in src/baderkit/core/toolkit/grid.py
Lines 764-803
def get_points_in_radius(
    self,
    point: NDArray,
    radius: float,
) -> NDArray[int]:
    """
    Gets the indices of the points in a radius around a point

    Parameters
    ----------
    radius : float
        The radius in cartesian distance units to find indices around the
        point.
    point : NDArray
        The indices of the point to perform the operation on.

    Returns
    -------
    NDArray[int]
        The point indices in the sphere around the provided point.

    """
    point = np.array(point)
    # Get the distance from each point to the origin
    point_distances = self.point_dists

    # Get the indices that are within the radius
    sphere_indices = np.where(point_distances <= radius)
    sphere_indices = np.column_stack(sphere_indices)

    # Get indices relative to the point
    sphere_indices = sphere_indices + point
    # adjust points to wrap around grid
    # line = [[round(float(a % b), 12) for a, b in zip(position, grid_data.shape)]]
    new_x = (sphere_indices[:, 0] % self.shape[0]).astype(int)
    new_y = (sphere_indices[:, 1] % self.shape[1]).astype(int)
    new_z = (sphere_indices[:, 2] % self.shape[2]).astype(int)
    sphere_indices = np.column_stack([new_x, new_y, new_z])
    # return new_x, new_y, new_z
    return sphere_indices
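
Example (a sketch; `grid` is assumed to be an existing Grid instance): the returned indices can be used directly to read the data values inside the sphere.

import numpy as np

center = np.array([10, 10, 10])  # grid indices of the sphere center
indices = grid.get_points_in_radius(center, radius=1.5)  # Nx3, wrapped at the boundaries
values = grid.total[indices[:, 0], indices[:, 1], indices[:, 2]]
print(values.mean())  # average of the data within the radius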

get_transformation_in_radius(radius)

Gets the transformations required to move from a point to the points surrounding it within the provided radius

Parameters:

Name Type Description Default
radius float

The radius in cartesian distance units around the voxel.

required

Returns:

Type Description
NDArray[int]

An array of transformations to add to a point to get to each of the points within the radius surrounding it.

Source code in src/baderkit/core/toolkit/grid.py
def get_transformation_in_radius(self, radius: float) -> NDArray[int]:
    """
    Gets the transformations required to move from a point to the points
    surrounding it within the provided radius

    Parameters
    ----------
    radius : float
        The radius in cartesian distance units around the voxel.

    Returns
    -------
    NDArray[int]
        An array of transformations to add to a point to get to each of the
        points within the radius surrounding it.

    """
    # Get voxels around origin
    voxel_distances = self.point_dists
    # sphere_grid = np.where(voxel_distances <= radius, True, False)
    # eroded_grid = binary_erosion(sphere_grid)
    # shell_indices = np.where(sphere_grid!=eroded_grid)
    shell_indices = np.where(voxel_distances <= radius)
    # Now we want to translate these indices to next to the corner so that
    # we can use them as transformations to move a voxel to the edge
    final_shell_indices = []
    for a, x in zip(self.shape, shell_indices):
        new_x = x - a
        abs_new_x = np.abs(new_x)
        new_x_filter = abs_new_x < x
        final_x = np.where(new_x_filter, new_x, x)
        final_shell_indices.append(final_x)

    return np.column_stack(final_shell_indices)
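
Example (a sketch; `grid` is assumed to exist): the transformations are integer offsets that can be added to any point and wrapped, reproducing the neighborhood that get_points_in_radius returns.

import numpy as np

transforms = grid.get_transformation_in_radius(radius=1.5)  # Nx3 integer offsets
point = np.array([10, 10, 10])
neighbors = (point + transforms) % grid.shape  # equivalent to get_points_in_radius(point, 1.5)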

get_voxel_coords_from_index(site)

Takes in an atom's site index and returns the equivalent voxel grid index.

Parameters:

Name Type Description Default
site int

The index of the site to find the grid index for.

required

Returns:

Type Description
NDArray[int]

A voxel grid index.

Source code in src/baderkit/core/toolkit/grid.py
def get_voxel_coords_from_index(self, site: int) -> NDArray[int]:
    """
    Takes in an atom's site index and returns the equivalent voxel grid index.

    Parameters
    ----------
    site : int
        The index of the site to find the grid index for.

    Returns
    -------
    NDArray[int]
        A voxel grid index.

    """
    return self.frac_to_grid(self.structure[site].frac_coords)
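
Example (a sketch; `grid` is assumed to exist): the returned coordinates are generally fractional voxel positions, so they are rounded and wrapped before being used as array indices.

import numpy as np

voxel_coords = grid.get_voxel_coords_from_index(0)  # coordinates of the first site
nearest_point = np.round(voxel_coords).astype(int) % grid.shape  # nearest on-grid point
value = grid.total[tuple(nearest_point)]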

get_voxel_coords_from_neigh(neigh)

Gets the voxel grid index from a neighbor atom object from pymatgen's structure.get_neighbors method.

Parameters:

Name Type Description Default
neigh dict

A neighbor dictionary from pymatgen's structure.get_neighbors method.

required

Returns:

Type Description
NDArray[int]

A voxel grid index as an array.

Source code in src/baderkit/core/toolkit/grid.py
def get_voxel_coords_from_neigh(self, neigh: dict) -> NDArray[int]:
    """
    Gets the voxel grid index from a neighbor atom object from pymatgen's
    structure.get_neighbors method.

    Parameters
    ----------
    neigh : dict
        A neighbor dictionary from pymatgen's structure.get_neighbors
        method.

    Returns
    -------
    NDArray[int]
        A voxel grid index as an array.

    """

    grid_size = self.shape
    frac_coords = neigh.frac_coords
    voxel_coords = [a * b for a, b in zip(grid_size, frac_coords)]
    # voxel coordinates are the fractional coordinates scaled by the grid shape
    return np.array(voxel_coords)
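
Example (a sketch; `grid` is assumed to exist, using pymatgen's Structure.get_neighbors):

# neighbors within 3 Angstroms of the first site
neighbors = grid.structure.get_neighbors(grid.structure[0], r=3.0)
for neigh in neighbors:
    voxel_coords = grid.get_voxel_coords_from_neigh(neigh)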

get_voxel_coords_from_neigh_CrystalNN(neigh)

Gets the voxel grid index from a neighbor atom object from CrystalNN or VoronoiNN

Parameters:

Name Type Description Default
neigh

A neighbor type object from pymatgen.

required

Returns:

Type Description
NDArray[int]

A voxel grid index as an array.

Source code in src/baderkit/core/toolkit/grid.py
def get_voxel_coords_from_neigh_CrystalNN(self, neigh) -> NDArray[int]:
    """
    Gets the voxel grid index from a neighbor atom object from CrystalNN or
    VoronoiNN

    Parameters
    ----------
    neigh :
        A neighbor type object from pymatgen.

    Returns
    -------
    NDArray[int]
        A voxel grid index as an array.

    """
    grid_size = self.shape
    frac = neigh["site"].frac_coords
    voxel_coords = [a * b for a, b in zip(grid_size, frac)]
    # voxel coordinates are the fractional coordinates scaled by the grid shape
    return np.array(voxel_coords)
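
Example (a sketch; `grid` is assumed to exist, with CrystalNN taken from pymatgen):

from pymatgen.analysis.local_env import CrystalNN

cnn = CrystalNN()
neigh_info = cnn.get_nn_info(grid.structure, 0)  # list of neighbor dicts with a "site" entry
voxel_coords = grid.get_voxel_coords_from_neigh_CrystalNN(neigh_info[0])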

grid_to_cart(vox_coords)

Takes in voxel coordinates and returns the Cartesian coordinates.

Parameters:

Name Type Description Default
vox_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[float]

Cartesian coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
def grid_to_cart(self, vox_coords: NDArray) -> NDArray[float]:
    """
    Takes in voxel coordinates and returns the Cartesian coordinates.

    Parameters
    ----------
    vox_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[float]
        Cartesian coordinates as an Nx3 Array.

    """
    frac_coords = self.grid_to_frac(vox_coords)
    return self.frac_to_cart(frac_coords)

grid_to_frac(vox_coords)

Takes in voxel coordinates and returns the fractional coordinates.

Parameters:

Name Type Description Default
vox_coords NDArray | list

An Nx3 Array or 1D array of length 3.

required

Returns:

Type Description
NDArray[float]

Fractional coordinates as an Nx3 Array.

Source code in src/baderkit/core/toolkit/grid.py
def grid_to_frac(self, vox_coords: NDArray) -> NDArray[float]:
    """
    Takes in voxel coordinates and returns the fractional coordinates.

    Parameters
    ----------
    vox_coords : NDArray | list
        An Nx3 Array or 1D array of length 3.

    Returns
    -------
    NDArray[float]
        Fractional coordinates as an Nx3 Array.

    """

    return vox_coords / self.shape
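
Example (a sketch; `grid` is assumed to exist) converting the same voxel coordinates with grid_to_frac and grid_to_cart:

import numpy as np

vox_coords = np.array([[0, 0, 0], [1, 2, 3]])  # Nx3 voxel coordinates
frac_coords = grid.grid_to_frac(vox_coords)    # fractions of each lattice vector
cart_coords = grid.grid_to_cart(vox_coords)    # Cartesian coordinates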

label(input, structure=np.ones([3, 3, 3])) staticmethod

Uses scipy's ndimage package to label an array, and corrects for periodic boundaries

Parameters:

Name Type Description Default
input NDArray

The array to label.

required
structure NDArray

The structuring element defining feature connections. The default is np.ones([3, 3, 3]).

ones([3, 3, 3])

Returns:

Type Description
NDArray[int]

An array of the same shape as the original with labels for each unique feature.

Source code in src/baderkit/core/toolkit/grid.py
@staticmethod
def label(input: NDArray, structure: NDArray = np.ones([3, 3, 3])) -> NDArray[int]:
    """
    Uses scipy's ndimage package to label an array, and corrects for
    periodic boundaries

    Parameters
    ----------
    input : NDArray
        The array to label.
    structure : NDArray, optional
        The structuring element defining feature connections.
        The default is np.ones([3, 3, 3]).

    Returns
    -------
    NDArray[int]
        An array of the same shape as the original with labels for each unique
        feature.

    """

    if structure is not None:
        labeled_array, _ = label(input, structure)
        if len(np.unique(labeled_array)) == 1:
            # there is one feature or no features
            return labeled_array
        # Features connected through opposite sides of the unit cell should
        # have the same label, but they don't currently. To handle this, we
        # pad our featured grid, re-label it, and check if the new labels
        # contain multiple of our previous labels.
        padded_featured_grid = np.pad(labeled_array, 1, "wrap")
        relabeled_array, label_num = label(padded_featured_grid, structure)
    else:
        labeled_array, _ = label(input)
        padded_featured_grid = np.pad(labeled_array, 1, "wrap")
        relabeled_array, label_num = label(padded_featured_grid)

    # We want to keep track of which features are connected to each other
    unique_connections = [[] for i in range(len(np.unique(labeled_array)))]

    for i in np.unique(relabeled_array):
        # for i in range(label_num):
        # Get the list of features that are in this super feature
        mask = relabeled_array == i
        connected_features = list(np.unique(padded_featured_grid[mask]))
        # Iterate over these features. If they exist in a connection that we
        # already have, we want to extend the connection to include any other
        # features in this super feature
        for j in connected_features:

            unique_connections[j].extend([k for k in connected_features if k != j])

            unique_connections[j] = list(np.unique(unique_connections[j]))

    # create set/list to keep track of which features have already been connected
    # to others and the full list of connections
    already_connected = set()
    reduced_connections = []

    # loop over each shared connection
    for i in range(len(unique_connections)):
        if i in already_connected:
            # we've already done these connections, so we skip
            continue
        # create sets of connections to compare with as we add more
        connections = set()
        new_connections = set(unique_connections[i])
        while connections != new_connections:
            # loop over the connections we've found so far. As we go, add
            # any features we encounter to our set.
            connections = new_connections.copy()
            for j in connections:
                already_connected.add(j)
                new_connections.update(unique_connections[j])

        # If we found any connections, append them to our list of reduced connections
        if connections:
            reduced_connections.append(sorted(new_connections))

    # For each set of connections in our reduced set, relabel all values to
    # the lowest one.
    for connections in reduced_connections:
        connected_features = np.unique(connections)
        lowest_idx = connected_features[0]
        for higher_idx in connected_features[1:]:
            labeled_array = np.where(
                labeled_array == higher_idx, lowest_idx, labeled_array
            )

    # Now we reduce the feature labels so that they start at 0
    for i, j in enumerate(np.unique(labeled_array)):
        labeled_array = np.where(labeled_array == j, i, labeled_array)

    return labeled_array
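
Example (a sketch; the import path for Grid is assumed and may differ in your install): a feature split across opposite faces of the cell receives a single label.

import numpy as np
from baderkit.core.toolkit import Grid  # assumed import path

data = np.zeros((6, 6, 6), dtype=bool)
data[0, :, :] = True   # slab on one face of the cell...
data[-1, :, :] = True  # ...and its periodic continuation on the opposite face
labels = Grid.label(data)
print(np.unique(labels))  # [0 1]: background plus one periodically connected feature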

linear_add(other, scale_factor=1.0)

Method to do a linear sum of volumetric objects. Used by + and - operators as well. Returns a VolumetricData object containing the linear sum.

Parameters:

Name Type Description Default
other Grid

Another Grid object

required
scale_factor float

Factor to scale the other data by

1.0

Returns:

Type Description
Grid corresponding to self + scale_factor * other.
Source code in src/baderkit/core/toolkit/grid.py
def linear_add(self, other: Self, scale_factor=1.0) -> Self:
    """
    Method to do a linear sum of volumetric objects. Used by + and -
    operators as well. Returns a VolumetricData object containing the
    linear sum.

    Parameters
    ----------
    other : Grid
        Another Grid object
    scale_factor : float
        Factor to scale the other data by

    Returns
    -------
        Grid corresponding to self + scale_factor * other.
    """
    if self.structure != other.structure:
        logging.warning(
            "Structures are different. Make sure you know what you are doing...",
            stacklevel=2,
        )
    if list(self.data) != list(other.data):
        raise ValueError(
            "Data have different keys! Maybe one is spin-polarized and the other is not?"
        )

    # To add checks
    data = {}
    for k in self.data:
        data[k] = self.data[k] + scale_factor * other.data[k]

    new = deepcopy(self)
    new.data = data.copy()
    new.data_aug = {}  # TODO: Can this be added somehow?
    return new
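
Example (a sketch; `grid_a` and `grid_b` are assumed to be Grid objects on the same structure and mesh):

diff_grid = grid_a.linear_add(grid_b, scale_factor=-1.0)  # grid_a - grid_b
sum_grid = grid_a + grid_b  # the + operator routes through linear_add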

linear_slice(p1, p2, n=100)

Interpolates the data between two fractional coordinates.

Parameters:

Name Type Description Default
p1 NDArray[float]

The fractional coordinates of the first point

required
p2 NDArray[float]

The fractional coordinates of the second point

required
n int

The number of points to collect along the line

100

Returns:

Type Description
list[float]

List of n data points (mostly interpolated) representing a linear slice of the data from point p1 to point p2.

Source code in src/baderkit/core/toolkit/grid.py
def linear_slice(self, p1: NDArray[float], p2: NDArray[float], n: int = 100):
    """
    Interpolates the data between two fractional coordinates.

    Parameters
    ----------
    p1 : NDArray[float]
        The fractional coordinates of the first point
    p2 : NDArray[float]
        The fractional coordinates of the second point
    n : int, optional
        The number of points to collect along the line

    Returns
    -------
    list[float]
        List of n data points (mostly interpolated) representing a linear
        slice of the data from point p1 to point p2.
    """
    if type(p1) not in {list, np.ndarray}:
        raise TypeError(
            f"type of p1 should be list or np.ndarray, got {type(p1).__name__}"
        )
    if len(p1) != 3:
        raise ValueError(f"length of p1 should be 3, got {len(p1)}")
    if type(p2) not in {list, np.ndarray}:
        raise TypeError(
            f"type of p2 should be list or np.ndarray, got {type(p2).__name__}"
        )
    if len(p2) != 3:
        raise ValueError(f"length of p2 should be 3, got {len(p2)}")

    x_pts = np.linspace(p1[0], p2[0], num=n)
    y_pts = np.linspace(p1[1], p2[1], num=n)
    z_pts = np.linspace(p1[2], p2[2], num=n)
    frac_coords = np.column_stack((x_pts, y_pts, z_pts))
    return self.values_at(frac_coords)
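
Example (a sketch; `grid` is assumed to exist): an interpolated profile along the body diagonal of the cell, useful for line plots.

values = grid.linear_slice([0.0, 0.0, 0.0], [1.0, 1.0, 1.0], n=200)
# `values` holds 200 interpolated data points from the origin to the opposite corner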

regrid(desired_resolution=1200, new_shape=None, order=3)

Returns a new grid resized using scipy's ndimage.zoom method

Parameters:

Name Type Description Default
desired_resolution int

The desired resolution in voxels/A^3. The default is 1200.

1200
new_shape array

The new array shape. Takes precedence over desired_resolution. The default is None.

None
order int

The order of spline interpolation to use. The default is 3.

3

Returns:

Type Description
Self

A new Grid object near the desired resolution.

Source code in src/baderkit/core/toolkit/grid.py
def regrid(
    self,
    desired_resolution: int = 1200,
    new_shape: np.array = None,
    order: int = 3,
) -> Self:
    """
    Returns a new grid resized using scipy's ndimage.zoom method

    Parameters
    ----------
    desired_resolution : int, optional
        The desired resolution in voxels/A^3. The default is 1200.
    new_shape : np.array, optional
        The new array shape. Takes precedence over desired_resolution. The default is None.
    order : int, optional
        The order of spline interpolation to use. The default is 3.

    Returns
    -------
    Self
        A new Grid object near the desired resolution.
    """

    # get the original grid size and lattice volume.
    shape = self.shape
    volume = self.structure.volume

    if new_shape is None:
        # calculate how much the number of voxels along each unit cell must be
        # multiplied to reach the desired resolution.
        scale_factor = ((desired_resolution * volume) / shape.prod()) ** (1 / 3)

        # calculate the new grid shape, rounding to the nearest integer along
        # each side
        new_shape = np.around(shape * scale_factor).astype(np.int32)

    # get the factor to zoom by
    zoom_factor = new_shape / shape

    # zoom each piece of data
    new_data = {}
    for key, data in self.data.items():
        new_data[key] = zoom(
            data, zoom_factor, order=order, mode="grid-wrap", grid_mode=True
        )

    # TODO: Add augment data?
    return Grid(structure=self.structure, data=new_data)
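
Example (a sketch; `grid` is assumed to exist):

import numpy as np

fine_grid = grid.regrid(desired_resolution=2000)  # target roughly 2000 voxels/A^3
fixed_grid = grid.regrid(new_shape=np.array([96, 96, 96]))  # explicit shape takes precedence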

split_to_spin()

Splits the grid into two Grid objects representing the spin up and spin down contributions.

Returns:

Type Description
tuple[Self, Self]

The spin-up and spin-down Grid objects.

Source code in src/baderkit/core/toolkit/grid.py
def split_to_spin(self) -> tuple[Self, Self]:
    """
    Splits the grid into two Grid objects representing the spin up and spin down contributions.

    Returns
    -------
    tuple[Self, Self]
        The spin-up and spin-down Grid objects.

    """

    # first check if the grid has spin parts
    assert (
        self.is_spin_polarized
    ), "Only one set of data detected. The grid cannot be split into spin up and spin down"
    assert not self.is_soc

    # Now we get the separate data parts. If the data is ELF, the parts are
    # stored as total=spin up and diff = spin down
    if self.data_type == "elf":
        logging.info(
            "Splitting Grid using ELFCAR conventions (spin-up in 'total', spin-down in 'diff')"
        )
        spin_up_data = self.total.copy()
        spin_down_data = self.diff.copy()
    elif self.data_type == "charge":
        logging.info(
            "Splitting Grid using CHGCAR conventions (spin-up + spin-down in 'total', spin-up - spin-down in 'diff')"
        )
        spin_data = self.spin_data
        # pymatgen uses some custom class as keys here
        for key in spin_data.keys():
            if key.value == 1:
                spin_up_data = spin_data[key].copy()
            elif key.value == -1:
                spin_down_data = spin_data[key].copy()

    # convert to dicts
    spin_up_data = {"total": spin_up_data}
    spin_down_data = {"total": spin_down_data}

    # get augment data
    aug_up_data = (
        {"total": self.data_aug["total"]} if "total" in self.data_aug else {}
    )
    aug_down_data = (
        {"total": self.data_aug["diff"]} if "diff" in self.data_aug else {}
    )

    spin_up_grid = Grid(
        structure=self.structure.copy(),
        data=spin_up_data,
        data_aug=aug_up_data,
        data_type=self.data_type,
        source_format=self.source_format,
    )
    spin_down_grid = Grid(
        structure=self.structure.copy(),
        data=spin_down_data,
        data_aug=aug_down_data,
        data_type=self.data_type,
        source_format=self.source_format,
    )

    return spin_up_grid, spin_down_grid
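
Example (a sketch; `grid` is assumed to be a spin-polarized, non-SOC Grid):

if grid.is_spin_polarized:
    spin_up_grid, spin_down_grid = grid.split_to_spin()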

value_at(x, y, z)

Get a data value from self.data at a given point (x, y, z) in terms of fractional lattice parameters. Will be interpolated using a cubic spline on self.data if (x, y, z) is not in the original set of data points.

Parameters:

Name Type Description Default
x float

Fraction of lattice vector a.

required
y float

Fraction of lattice vector b.

required
z float

Fraction of lattice vector c.

required

Returns:

Type Description
float

Value from self.data (potentially interpolated) corresponding to the point (x, y, z).

Source code in src/baderkit/core/toolkit/grid.py
def value_at(
    self,
    x: float,
    y: float,
    z: float,
):
    """Get a data value from self.data at a given point (x, y, z) in terms
    of fractional lattice parameters. Will be interpolated using a
    cubic spline on self.data if (x, y, z) is not in the original
    set of data points.

    Parameters
    ----------
    x : float
        Fraction of lattice vector a.
    y: float
        Fraction of lattice vector b.
    z: float
        Fraction of lattice vector c.

    Returns
    -------
    float
        Value from self.data (potentially interpolated) corresponding to
        the point (x, y, z).
    """
    # interpolate value
    return self.interpolator([x, y, z])[0]
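
Example (a sketch; `grid` is assumed to exist): the interpolated value at the center of the cell.

center_value = grid.value_at(0.5, 0.5, 0.5)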

values_at(frac_coords)

Interpolates the value of the data at each fractional coordinate in a given list or array.

Parameters:

Name Type Description Default
frac_coords NDArray

The fractional coordinates to interpolate values at with shape N, 3.

required

Returns:

Type Description
list[float]

The interpolated value at each fractional coordinate.

Source code in src/baderkit/core/toolkit/grid.py
def values_at(
    self,
    frac_coords: NDArray[float],
) -> list[float]:
    """
    Interpolates the value of the data at each fractional coordinate in a
    given list or array.

    Parameters
    ----------
    frac_coords : NDArray
        The fractional coordinates to interpolate values at with shape
        N, 3.

    Returns
    -------
    list[float]
        The interpolated value at each fractional coordinate.

    """
    # interpolate values
    return self.interpolator(frac_coords)

write(filename, output_format=None, **kwargs)

Writes the Grid to the requested format file at the provided path. If no format is provided, uses this Grid object's stored format.

Parameters:

Name Type Description Default
filename Path | str

The name of the file to write to.

required
output_format Format | str

The format to write with. If None, writes to the source format stored in this Grid object's metadata. Defaults to None.

None

Returns:

Type Description
None.
Source code in src/baderkit/core/toolkit/grid.py
def write(
    self,
    filename: Path | str,
    output_format: Format | str = None,
    **kwargs,
):
    """
    Writes the Grid to the requested format file at the provided path. If no
    format is provided, uses this Grid object's stored format.

    Parameters
    ----------
    filename : Path | str
        The name of the file to write to.
    output_format : Format | str
        The format to write with. If None, writes to the source format stored
        in this Grid object's metadata.
        Defaults to None.

    Returns
    -------
    None.

    """
    # If no provided format, get from metadata
    if output_format is None:
        output_format = self.source_format
    # Make sure format is a Format object not a string
    output_format = Format(output_format)
    # get the writing method corresponding to this output format
    method_name = output_format.writer
    # write the grid
    getattr(self, method_name)(filename, **kwargs)
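
Example (a sketch; `grid` is assumed to have been read from a file, so its source format is known):

grid.write("CHGCAR_out")                          # uses the format the grid was read from
grid.write("density.cube", output_format="cube")  # force Gaussian cube output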

write_cube(filename, **kwargs)

Writes the Grid to a Gaussian cube-like file at the provided path.

Parameters:

Name Type Description Default
filename Path | str

The name of the file to write to.

required

Returns:

Type Description
None.
Source code in src/baderkit/core/toolkit/grid.py
def write_cube(
    self,
    filename: Path | str,
    **kwargs,
):
    """
    Writes the Grid to a Gaussian cube-like file at the provided path.

    Parameters
    ----------
    filename : Path | str
        The name of the file to write to.

    Returns
    -------
    None.

    """
    filename = Path(filename)
    logging.info(f"Writing {filename.name}")
    write_cube_file(
        filename=filename,
        grid=self,
        **kwargs,
    )

write_vasp(filename, vasp4_compatible=False)

Writes the Grid to a VASP-like file at the provided path.

Parameters:

Name Type Description Default
filename Path | str

The name of the file to write to.

required
vasp4_compatible bool

Whether to write the file in a VASP 4 compatible format. The default is False.

False

Returns:

Type Description
None.
Source code in src/baderkit/core/toolkit/grid.py
def write_vasp(
    self,
    filename: Path | str,
    vasp4_compatible: bool = False,
):
    """
    Writes the Grid to a VASP-like file at the provided path.

    Parameters
    ----------
    filename : Path | str
        The name of the file to write to.
    vasp4_compatible : bool, optional
        Whether to write the file in a VASP 4 compatible format.
        The default is False.

    Returns
    -------
    None.

    """
    filename = Path(filename)
    logging.info(f"Writing {filename.name}")
    write_vasp_file(filename=filename, grid=self, vasp4_compatible=vasp4_compatible)