Improved compatibility for pandas<1.4
marshka committed Apr 17, 2023
1 parent 5a4fe86 commit 4a5d6c4
Showing 6 changed files with 29 additions and 31 deletions.
17 changes: 9 additions & 8 deletions README.md
@@ -1,11 +1,12 @@
<div align="center">
<br><br>
<img alt="Torch Spatiotemporal" src=./docs/source/_static/img/tsl_logo_text.svg width="85%"/>
<img alt="Torch Spatiotemporal" src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo_text.svg" width="85%"/>
<h3>Neural spatiotemporal forecasting with PyTorch</h3>
<hr>
<p>
<img alt="PyPI" src="https://img.shields.io/pypi/v/torch-spatiotemporal">
<img alt="PyPI - Python Version" src="https://img.shields.io/badge/python-%3E%3D3.8-blue">
<!-- img alt="PyPI - Python Version" src="https://img.shields.io/pypi/pyversions/torch-spatiotemporal" -->
<img alt="Total downloads" src="https://static.pepy.tech/badge/torch-spatiotemporal">
<a href='https://torch-spatiotemporal.readthedocs.io/en/latest/?badge=latest'>
<img src='https://readthedocs.org/projects/torch-spatiotemporal/badge/?version=latest' alt='Documentation Status' />
@@ -16,21 +17,21 @@
</p>
</div>

<p><img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> <b>tsl</b> <em>(Torch Spatiotemporal)</em> is a library built to accelerate research on neural spatiotemporal data processing
<p><img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> <b>tsl</b> <em>(Torch Spatiotemporal)</em> is a library built to accelerate research on neural spatiotemporal data processing
methods, with a focus on Graph Neural Networks.</p>

<p><img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is built on several libraries of the <b>Python</b> scientific computing ecosystem, with the final objective of providing a straightforward process that goes from data preprocessing to model prototyping.
In particular, <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl offers a wide range of utilities to develop neural networks in <img src="./docs/source/_static/img/logos/pytorch.svg" width="20px" align="center"/> <a href="https://pytorch.org"><b>PyTorch</b></a> for processing spatiotemporal data signals.</p>
<p><img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is built on several libraries of the <b>Python</b> scientific computing ecosystem, with the final objective of providing a straightforward process that goes from data preprocessing to model prototyping.
In particular, <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl offers a wide range of utilities to develop neural networks in <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/logos/pytorch.svg" width="20px" align="center"/> <a href="https://pytorch.org"><b>PyTorch</b></a> for processing spatiotemporal data signals.</p>

## Getting Started

-Before you start using <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl, please review the <a href="https://torch-spatiotemporal.readthedocs.io/en/latest/">documentation</a> to get an understanding of the library and its capabilities.
+Before you start using <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl, please review the <a href="https://torch-spatiotemporal.readthedocs.io/en/latest/">documentation</a> to get an understanding of the library and its capabilities.

You can also explore the examples provided in the `examples` directory to see how to train deep learning models on spatiotemporal data.

## Installation

-Before installing <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl, make sure you have installed <img src="./docs/source/_static/img/logos/pytorch.svg" width="20px" align="center"/> <a href="https://pytorch.org">PyTorch</a> (>=1.9.0) and <img src="./docs/source/_static/img/logos/pyg.svg" width="20px" align="center"/> <a href="https://pyg.org">PyG</a> (>=2.0.3) in your virtual environment (see [PyG installation guidelines](https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html)). <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is available for Python>=3.8. We recommend installation from github to be up-to-date with the latest version:
+Before installing <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl, make sure you have installed <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/logos/pytorch.svg" width="20px" align="center"/> <a href="https://pytorch.org">PyTorch</a> (>=1.9.0) and <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/logos/pyg.svg" width="20px" align="center"/> <a href="https://pyg.org">PyG</a> (>=2.0.3) in your virtual environment (see [PyG installation guidelines](https://pytorch-geometric.readthedocs.io/en/latest/install/installation.html)). <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is available for Python>=3.8. We recommend installation from github to be up-to-date with the latest version:

```bash
pip install git+https://github.com/TorchSpatiotemporal/tsl.git
@@ -50,7 +51,7 @@ conda env create -f conda_env.yml

## Tutorial

-The best way to start using <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is by following the tutorial notebook in `examples/notebooks/a_gentle_introduction_to_tsl.ipynb`.
+The best way to start using <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl is by following the tutorial notebook in `examples/notebooks/a_gentle_introduction_to_tsl.ipynb`.

## Documentation

@@ -73,7 +74,7 @@ If you use Torch Spatiotemporal for your research, please consider citing the li

By [Andrea Cini](https://andreacini.github.io/) and [Ivan Marisca](https://marshka.github.io/).

-Thanks to all contributors! Check the [Contributing guidelines](https://github.com/TorchSpatiotemporal/tsl/blob/dev/.github/CONTRIBUTING.md) and help us build a better <img src="./docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl.
+Thanks to all contributors! Check the [Contributing guidelines](https://github.com/TorchSpatiotemporal/tsl/blob/dev/.github/CONTRIBUTING.md) and help us build a better <img src="https://raw.githubusercontent.com/TorchSpatiotemporal/tsl/main/docs/source/_static/img/tsl_logo.svg" width="25px" align="center"/> tsl.

<a href="https://github.com/TorchSpatiotemporal/tsl/graphs/contributors">
<img src="https://contrib.rocks/image?repo=TorchSpatiotemporal/tsl" />
14 changes: 8 additions & 6 deletions examples/imputation/run_imputation_experiment.py
@@ -164,12 +164,14 @@ def run_imputation(cfg: DictConfig):
        mode='min',
    )

-    trainer = Trainer(max_epochs=cfg.epochs,
-                      default_root_dir=cfg.run.dir,
-                      logger=exp_logger,
-                      gpus=1 if torch.cuda.is_available() else None,
-                      gradient_clip_val=cfg.grad_clip_val,
-                      callbacks=[early_stop_callback, checkpoint_callback])
+    trainer = Trainer(
+        max_epochs=cfg.epochs,
+        default_root_dir=cfg.run.dir,
+        logger=exp_logger,
+        accelerator='gpu' if torch.cuda.is_available() else 'cpu',
+        devices=1,
+        gradient_clip_val=cfg.grad_clip_val,
+        callbacks=[early_stop_callback, checkpoint_callback])

trainer.fit(imputer, datamodule=dm)
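A minimal standalone sketch of the device-selection pattern used in the new `Trainer` call above, assuming a PyTorch Lightning release in which `accelerator`/`devices` replace the deprecated `gpus` argument (the epoch count here is arbitrary):

```python
import torch
from pytorch_lightning import Trainer

# Pick the accelerator the same way the updated script does.
accelerator = 'gpu' if torch.cuda.is_available() else 'cpu'

trainer = Trainer(max_epochs=10,          # arbitrary value for illustration
                  accelerator=accelerator,
                  devices=1)
```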

7 changes: 2 additions & 5 deletions tsl/datasets/pems_bay.py
@@ -77,11 +77,8 @@ def load_raw(self):
        # load traffic data
        traffic_path = os.path.join(self.root_dir, 'pems_bay.h5')
        df = pd.read_hdf(traffic_path)
-        # add missing values
-        datetime_idx = sorted(df.index)
-        date_range = pd.date_range(datetime_idx[0],
-                                   datetime_idx[-1],
-                                   freq='5T')
+        # add missing values (index is sorted)
+        date_range = pd.date_range(df.index[0], df.index[-1], freq='5T')
        df = df.reindex(index=date_range)
        # load distance matrix
        path = os.path.join(self.root_dir, 'pems_bay_dist.npy')
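As an illustration of the reindexing step above (the sensor name and readings are made up): rebuilding the complete 5-minute grid and calling `reindex` inserts NaN rows at the missing timestamps.

```python
import pandas as pd

# Toy 5-minute series with one timestamp missing (values are made up).
idx = pd.to_datetime(['2017-01-01 00:00', '2017-01-01 00:05', '2017-01-01 00:15'])
df = pd.DataFrame({'sensor_0': [1.0, 2.0, 4.0]}, index=idx)

# Rebuild the full grid; the missing step shows up as a NaN row.
date_range = pd.date_range(df.index[0], df.index[-1], freq='5T')
df = df.reindex(index=date_range)
print(df)
#                      sensor_0
# 2017-01-01 00:00:00       1.0
# 2017-01-01 00:05:00       2.0
# 2017-01-01 00:10:00       NaN
# 2017-01-01 00:15:00       4.0
```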
2 changes: 1 addition & 1 deletion tsl/datasets/prototypes/casting.py
@@ -99,4 +99,4 @@ def time_unit_to_nanoseconds(time_unit: str):
        return 365.2425 * 24 * 60 * 60 * 10**9
    elif time_unit == 'week':
        time_unit = 'W'
-    return pd.Timedelta('1' + time_unit).delta
+    return pd.Timedelta('1' + time_unit).value
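A quick check of the replacement attribute: `Timedelta.value` returns the duration as integer nanoseconds and, unlike `.delta`, is not deprecated in recent pandas releases.

```python
import pandas as pd

# `.value` reports the duration in integer nanoseconds.
print(pd.Timedelta('1W').value)  # 604800000000000 (one week)
print(pd.Timedelta('1D').value)  # 86400000000000  (one day)
```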
7 changes: 3 additions & 4 deletions tsl/metrics/torch/metric_base.py
@@ -65,10 +65,9 @@ def __init__(self,

        if metric_fn_kwargs is None:
            metric_fn_kwargs = dict()
-        if metric_fn is None:
-            self.metric_fn = None
-        else:
-            self.metric_fn = partial(metric_fn, **metric_fn_kwargs)

+        self.metric_fn = partial(metric_fn, **metric_fn_kwargs)

        self.mask_nans = mask_nans
        self.mask_inf = mask_inf
        if at is None:
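For context, a tiny sketch of the pattern the simplified `__init__` relies on: `functools.partial` binds the keyword arguments once, so later calls only pass predictions and targets. The metric function and its keyword below are hypothetical, for illustration only.

```python
from functools import partial

import torch


# Hypothetical metric function; name and signature are illustrative only.
def mae(y_hat, y, reduction='mean'):
    err = (y_hat - y).abs()
    return err.mean() if reduction == 'mean' else err


# Bind the kwargs once, as done with metric_fn_kwargs above.
metric_fn = partial(mae, reduction='none')
out = metric_fn(torch.ones(3), torch.zeros(3))  # elementwise errors, shape (3,)
```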
13 changes: 6 additions & 7 deletions tsl/nn/layers/base/embedding.py
@@ -63,19 +63,18 @@ def get_emb(self):

    def forward(self,
                expand: Optional[List] = None,
-                token_index: OptTensor = None,
-                tokens_first: bool = True):
+                node_index: OptTensor = None,
+                nodes_first: bool = True):
        """"""
        emb = self.get_emb()
-        if token_index is not None:
-            emb = emb[token_index]
-        if not tokens_first:
+        if node_index is not None:
+            emb = emb[node_index]
+        if not nodes_first:
            emb = emb.T
        if expand is None:
            return emb
        shape = [*emb.size()]
        view = [
-            1 if d > 0 else shape.pop(0 if tokens_first else -1)
-            for d in expand
+            1 if d > 0 else shape.pop(0 if nodes_first else -1) for d in expand
        ]
        return emb.view(*view).expand(*expand)
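A standalone illustration of the view/expand broadcasting that `forward` performs, assuming node-first layout and made-up sizes: positive entries in `expand` become broadcast dimensions, while `-1` entries keep the embedding's own dimensions.

```python
import torch

num_nodes, emb_size = 5, 8
emb = torch.randn(num_nodes, emb_size)

# Target layout (batch, steps, nodes, channels); -1 keeps existing dims.
expand = [32, 24, -1, -1]
shape = list(emb.size())                               # [5, 8]
view = [1 if d > 0 else shape.pop(0) for d in expand]  # [1, 1, 5, 8]
out = emb.view(*view).expand(*expand)
print(out.shape)  # torch.Size([32, 24, 5, 8])
```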
