Releases: idaholab/Deep-Lynx
OWL Ontology Importer Refactor
This is a minor, non-breaking release of DeepLynx. It includes the following bugfixes and features.
- OWL Ontology Importer rewritten to handle more use cases and malformed XML files more effectively
- Various stability fixes
Various Bugfixes
Various bugfixes and the ability to save GraphQL queries to a JSON or CSV file.
Service Users and Various Bugfixes
This release is considered a minor, non-breaking release of DeepLynx. The following features have been added and bugs have been corrected.
- Service Users can now be accessed as "External Applications" from inside DeepLynx's UI. They can be used to assign API keys to external applications.
- Various bugfixes
Timeseries Processing & Various Bugfixes
This release is considered a minor, non-breaking release of DeepLynx. The following features have been added and bugs have been corrected.
- Timeseries data processing enabled - more information below and on the wiki https://gitlab.software.inl.gov/b650/Deep-Lynx/-/wikis/Querying-Timeseries-Data
- Ontology versioning enhanced and various versioning bugs corrected
- The server crash that happened on an invalid json payload when uploading data has been corrected
As of this release, DeepLynx can store and query timeseries data without first storing that information on the graph. Prior to this release, timeseries data was handled in the following ways, each considered suboptimal:
- Storing each timeseries entry as a node on the graph: Given the volume of timeseries data produced per sensor, this quickly overwhelms the graph and could drastically increase latency when querying or manipulating it. Storing entries as nodes also did not maintain order; order had to be artificially enforced by choosing a property of the timeseries data to sort on.
- Storing timeseries data as files: In this solution users would typically create a node with metadata about the series of measurements contained in one or more files. They would then upload those files to DeepLynx's blob storage system and attach them to the relevant node. While this eliminated the problem of an unwieldy number of nodes, hiding the data in blobs meant that DeepLynx could not query parts of the data not contained or covered in the stored metadata. This method also required the user to download the data and use a third-party program to display it.
To improve on the existing solutions, the following feature set was adopted as a target for DeepLynx timeseries data storage and querying capabilities.
- Timeseries data must follow the same data ingestion route as before
- Timeseries data must be mapped to a timeseries specific database table prior to storage
- Users must be able to query timeseries data quickly and without having to first download or leave DeepLynx
- Users must be able to perform simple filtering and ordering on their timeseries data
- Support for managing terabytes of timeseries data
We're happy to note that the 0.3.3 release has met these goals.
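As a rough illustration of the kind of filtered, ordered timeseries query this enables through the GraphQL layer (the table name, column names, and filter syntax below are hypothetical assumptions, not taken from the wiki page linked above):

```graphql
# Hypothetical sketch only - "SensorReadings" and its columns are
# illustrative names, not part of the actual DeepLynx schema.
{
  SensorReadings(
    timestamp: {operator: ">", value: "2022-01-01T00:00:00Z"}
    _record: {sortBy: "timestamp", sortDesc: false, limit: 100}
  ) {
    timestamp
    value
  }
}
```

See the wiki page referenced above for the actual query syntax supported by this release.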
Bugfixes
Deployment Streamlining, Processing Thread Fixes, UI Update, and Various Bugfixes
Description
- modified the package.json file to clean up npm commands and clarify the build process
- modified the Dockerfile to correctly build and run DeepLynx in a reproducible manner
- modified the database migration functionality - migrate is no longer a separate step but gets run on each application startup, removed old migrate commands
- added a docker_compose.yml file - this allows us to quickly scaffold and connect a Timescaledb image and DeepLynx image in a reproducible manner. Containers are pulled from the registries, ensuring a stable build each time
- the encryption key is now automatically generated and saved if a user does not provide their own key; modified the config to handle this automatic generation
- modified the readme to reflect all changes
- corrected tests
- corrected processing thread to avoid swamping the queue
- updated UI, and fixed various bugs
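As a rough sketch of what the docker_compose.yml described above provides (the service names, images, ports, and environment variables here are illustrative assumptions, not the contents of the shipped file):

```yaml
# Illustrative sketch only - image names, ports, and environment variables
# are assumptions; consult the shipped docker_compose.yml for the real values.
version: "3"
services:
  timescaledb:
    image: timescale/timescaledb:latest-pg12   # pulled from a registry
    environment:
      POSTGRES_PASSWORD: deeplynx
    ports:
      - "5432:5432"
  deep-lynx:
    image: idaholab/deep-lynx:latest           # hypothetical image name
    depends_on:
      - timescaledb
    environment:
      # migrations now run automatically on application startup
      CORE_DB_CONNECTION_STRING: postgresql://postgres:deeplynx@timescaledb:5432/deep_lynx
    ports:
      - "8090:8090"
```

Because both containers are pulled from registries, `docker compose up` yields the same stack on every machine, which is the reproducibility goal described above.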
Motivation and Context
Users were struggling to build DeepLynx from source, and the Docker capability was ignored or not well known - it also did not handle the database migration correctly. To give end users a quick and easy way to get DeepLynx up and running, we've added Docker Compose capability and greatly simplified the steps for building from source.
We were also running into various issues with the processing thread causing queue buildup. Our changes make processing quicker and keep it from swamping the processing queue.
Bug Fixes, Mapping Streamlining
This is a minor, non-breaking release of Deep Lynx. It lays the foundation for our new timeseries data support, solves a few bugs, and streamlines the type mapping process.
Release Notes
- updated packages for both server and UI, ran npm audit
- modified the GUI's client code to return the full error on a failed API call instead of just the status
- updated the delete dialogs to set a timer of 5 seconds instead of 10 when warning users
- updated the transformation dialog to flow more easily and be more visually appealing
- added time series specific fields and operations to the transformation dialog - such as table mapping and node attachment parameters
- various changes to the TransformationT type to reflect changes in the backing code
- various translation updates
- ontology versioning disabled by default
- eslint cleanup of the ui
- added timescale container run command to package.json
- added migrations and new fields for transformation object, updated mapper and repository to match
Query Layer Update, Various Security Updates and Bug Fixes
This release of Deep Lynx is considered a minor, breaking release.
The purpose of the Deep Lynx query layer is to query and filter data ingested in previous steps. While the previous version of this query layer allowed for sufficient filtering of nodes based on Metatype properties, this release allows for querying of and filtering on nodes, edges, or graph-like node-edge structure.
These changes are considered breaking because an extra wrapper was placed around Metatype queries to account for possible naming overlap when querying on Metatype Relationships. While the previous version of this query layer allowed a query to begin with the Metatype name directly, that query must now be wrapped within the metatypes{} object.
As mentioned earlier, edges can now be queried on using the relationships{} object, and graph-like data can be queried using the graph{} object. Additionally, when querying on Metatypes, users can now filter by relationships to other metatypes. Documentation has been updated to exhibit use cases for these newly supported behaviors.
These changes impact any code that relies on the new GraphQL query layer for data retrieval. Any code referencing the legacy query layer will not be affected. The Deep Lynx frontend does not yet reflect these changes.
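The breaking change described above can be sketched as follows (the Metatype name "Pump" and its properties are hypothetical, used only to show the wrapper):

```graphql
# Before this release, a query could begin with the Metatype name directly:
# {
#   Pump {
#     name
#   }
# }
#
# After this release, the same query must be wrapped in the metatypes{} object:
{
  metatypes {
    Pump {
      name
    }
  }
}
```

Edges and graph-like data follow the same pattern using the relationships{} and graph{} objects; see the updated documentation for complete examples.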
This release also contains a few minor bug fixes and security updates.
Security Update
This update to Deep Lynx is considered a minor, non-breaking release.
This update contains various security changes, such as package updates and changes to how the internal application handles exceptions and errors. This update contains no changes to endpoints or UI.
Ontology Versioning
This release of Deep Lynx is considered a major, non-breaking release.
Deep Lynx is unique in its ability to store data in graph-like format and under a user-defined ontology. Data has been versioned in Deep Lynx since roughly version 0.0.5. Data ingested from sources has its changes tracked, and in theory a user can see what the data looked like at any given point in time.
However, while the data was versioned, the ontology it was stored under was not. This led to problems when users edited the ontology - removing fields or adding required ones, for example. These changes were not tracked, so a user accessing data stored under an old version of a class might see deprecated properties or be missing required data. There was no way to reprocess previously ingested data to fit the new ontology, and the type mapping process had to be manually updated to handle changes.
To solve these problems and give users confidence in the accuracy of stored data, versioning was introduced for ontologies stored in Deep Lynx. Now when users query data they will always see the class and properties as they existed when that data was stored. Changes to the ontology are tracked and managed - and final approval of changes prior to application now falls to container administrators, not all users.
https://gitlab.software.inl.gov/b650/Deep-Lynx/-/wikis/Ontology-Versioning