Documentation updates
mikebroberts committed Oct 2, 2023
1 parent 031a023 commit 09a4482
Showing 6 changed files with 31 additions and 29 deletions.
24 changes: 11 additions & 13 deletions README.md
@@ -3,14 +3,11 @@
_A lightly opinionated DynamoDB library for TypeScript & JavaScript applications_

[DynamoDB](https://aws.amazon.com/dynamodb/) is a cloud-hosted NoSQL database from AWS (Amazon Web Services).
AWS [provides an SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-lib-dynamodb/) for using
DynamoDB from TypeScript and JavaScript applications but it doesn't provide a particularly rich abstraction on top of
the [basic AWS HTTPS API](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Operations_Amazon_DynamoDB.html).
AWS [provides a fairly basic SDK](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-lib-dynamodb/) for using DynamoDB from TypeScript and JavaScript applications.

[_DynamoDB Entity Store_](https://github.com/symphoniacloud/dynamodb-entity-store) is a library which uses the JavaScript V3 SDK from AWS and provides a higher level
interface to work with.
[_DynamoDB Entity Store_](https://github.com/symphoniacloud/dynamodb-entity-store) is a TypeScript / JavaScript library that provides a slightly higher abstraction.
It is definitely **not** an "ORM library", nor does it try to hide DynamoDB's fundamental behavior.
Because of this you'll still need to understand how to use DynamoDB from a modeling point of view - I strongly recommend
Therefore you'll still need a good understanding of how to work with DynamoDB in general - I strongly recommend
Alex DeBrie's [book on the subject](https://www.dynamodbbook.com/).

Entity Store provides the following:
@@ -23,7 +20,7 @@ Entity Store provides the following:
* ... but also allows non-standard and/or multi-table designs.
* A pretty-much-complete coverage of the entire DynamoDB API / SDK, including batch and transaction
operations, and options for diagnostic metadata (e.g. "consumed capacity").
* ... all without any runtime library dependencies, apart from official AWS DynamoDB libraries (AWS SDK V3).
* ... all without any runtime library dependencies, apart from the official AWS DynamoDB libraries (AWS SDK V3).

This library is named _Entity Store_ since it's based on the idea that your DynamoDB tables store one or many collections of related records, and each
collection has the same persisted structure.
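For example, every record in the "sheep" collection used throughout this README is persisted with the same shape. The literal below is reconstructed from the debug output shown later in this README; `PK`, `SK`, `_et` and `_lastUpdated` are metadata attributes managed by the library:

```typescript
// One persisted sheep record - reconstructed from the debug output later in this README
const persistedSheepExample = {
  PK: 'SHEEP#BREED#merino',       // partition key, generated from the breed
  SK: 'NAME#bob',                 // sort key, generated from the name
  _et: 'sheep',                   // entity type metadata
  _lastUpdated: '2023-08-21T19:15:37.396Z',
  breed: 'merino',
  name: 'bob',
  ageInYears: 4
}
```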
@@ -103,7 +100,8 @@ export const SHEEP_ENTITY = createEntity(
({ name }: Pick<Sheep, 'name'>) => `NAME#${name}`
)
```
We only need to create this object **once per type** of entity in our application, and so you might want to define each of them as a global constant. Each entity object is responsible for:
We only need to create this object **once per type** of entity in our application, and it can usually be stateless, so you might want to define each of them as a global constant.
Each entity object is responsible for:

* Defining the name of the entity type
* Expressing how to convert a DynamoDB record to a well-typed object ("parsing")
@@ -158,7 +156,7 @@ a sheep.

Note that unlike the AWS SDK's `get` operation here we get a **well-typed result**. This is possible because of the
[**type-predicate function**](https://www.typescriptlang.org/docs/handbook/2/narrowing.html#using-type-predicates) that we included when creating `SHEEP_ENTITY` .
Note that in this basic example we assume that the underlying DynamoDB record
In this basic example we assume that the underlying DynamoDB record
attributes include all the properties on a sheep object, but it's possible to customize parsing and/or derive values
from the PK and SK fields if you want to optimize your DynamoDB table - I'll show that a little later.
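For instance, here's a sketch of the kind of typed access this enables, assuming a `sheepOperations` object obtained via `entityStore.for(SHEEP_ENTITY)` as in the library's other examples (the key values are illustrative):

```typescript
const shaun = await sheepOperations.getOrThrow({ breed: 'merino', name: 'shaun' })
// shaun is typed as Sheep, so its properties can be used directly and safely:
console.log(`${shaun.name} is ${shaun.ageInYears} years old`)
```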

@@ -178,11 +176,11 @@ bob is 4 years old
shaun is 3 years old
```

Similar to `get` we need to provide the value for the `PK` field, and again under the covers Entity Store is
Similar to `getOrThrow` we need to provide the value for the `PK` field, and again under the covers Entity Store is
calling `pk()` on `SHEEP_ENTITY` . Since this query only filters on the `PK` attribute we only need to provide `breed`
when we call `queryAllByPk()`.

Like `getOrThrow`, the result of the query operation is a well-typed list of items - again by using the parser / type
The result of the query operation is a well-typed list of items - again by using the parser / type
predicate function on `SHEEP_ENTITY` .
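A sketch of the query call being described (the actual snippet is outside the visible part of this diff, so the names here follow the other examples):

```typescript
// queryAllByPk returns a well-typed array of Sheep for the given partition
const merinoSheep = await sheepOperations.queryAllByPk({ breed: 'merino' })
for (const sheep of merinoSheep) {
  console.log(`${sheep.name} is ${sheep.ageInYears} years old`)
}
```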

A lot of the power of using DynamoDB comes from using the Sort Key as part of a query.
@@ -231,9 +229,9 @@ DynamoDB Entity Store DEBUG - Attempting to query or scan entity sheep [{"useAll
DynamoDB Entity Store DEBUG - Query or scan result [{"result":{"$metadata":{"httpStatusCode":200,"requestId":"CN2O9KUEECRUJ0BTPT1DT79G6NVV4KQNSO5AEMVJF66Q9ASUAAJG","attempts":1,"totalRetryDelay":0},"Count":1,"Items":[{"_lastUpdated":"2023-08-21T19:15:37.396Z","SK":"NAME#bob","ageInYears":4,"PK":"SHEEP#BREED#merino","breed":"merino","_et":"sheep","name":"bob"}],"ScannedCount":1}}]
```

The store's `logger` can actually be anything that satisfies the [`EntityStoreLogger`](https://github.com/symphoniacloud/dynamodb-entity-store/blob/main/src/lib/util/logger.ts) type.
The store's `logger` can be anything that satisfies the [`EntityStoreLogger`](https://github.com/symphoniacloud/dynamodb-entity-store/blob/main/src/lib/util/logger.ts) type.
The library provides a silent "No-op" logger that it uses by default, as well as the [`consoleLogger`](https://symphoniacloud.github.io/dynamodb-entity-store/variables/consoleLogger.html) shown here, but you
can easily implement your own - there's an example in the [source code comments](https://github.com/symphoniacloud/dynamodb-entity-store/blob/94a622f/src/lib/util/logger.ts) of implementing a logger
can easily implement your own - e.g. there's an example in the [source code comments](https://github.com/symphoniacloud/dynamodb-entity-store/blob/94a622f/src/lib/util/logger.ts) of implementing a logger
for [AWS PowerTools](https://docs.powertools.aws.dev/lambda/typescript/latest/core/logger/).

## Example 2: Adding a Global Secondary Index
4 changes: 2 additions & 2 deletions documentation/AdvancedSingleEntityOperations.md
@@ -8,7 +8,7 @@ By now you're probably pretty used to getting all of the operations for an Entit
const sheepOperations = entityStore.for(SHEEP_ENTITY)
```

The _advanced_ operations are simply the `advancedOperations` field of that:
The _advanced_ operations are the `advancedOperations` field of that object:

```typescript
const advancedSheepOperations: SingleEntityAdvancedOperations<Sheep, Pick<Sheep, 'breed'>, Pick<Sheep, 'name'>> =
@@ -67,7 +67,7 @@ This contains the unparsed return values from DynamoDB.
### Unparsed results for collection requests

In [chapter 4](SingleEntityTableQueriesAndTableScans.md) you learned about DynamoDB Entity Store's _entity filtering_ behavior for queries and scans.
For the _standard_ operations Entity Store simply discards any results for entities other than those in scope.
For the _standard_ operations Entity Store discards any results for entities other than those in scope.

For the _advanced_ versions of the query and scan operations Entity Store instead puts any results for other entities into an `unparsedItems` field of the operation result.
If there aren't any unparsed items this field isn't defined, but if there are it's an array of raw item results from DynamoDB.
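A hedged sketch of what that looks like in practice - the `queryAllByPk` method name on the advanced operations and the `items` field on its result are assumptions here, mirroring the standard operations; only `advancedOperations` and `unparsedItems` are taken from the text above:

```typescript
const sheepOps = entityStore.for(SHEEP_ENTITY)
const advancedResult = await sheepOps.advancedOperations.queryAllByPk({ breed: 'merino' })
// Parsed sheep come back as usual (assumed field name)...
const parsedSheep = advancedResult.items
// ...while raw DynamoDB items for any other entity types that shared the partition,
// if any, end up here instead of being silently discarded:
for (const rawItem of advancedResult.unparsedItems ?? []) {
  console.log('Item for another entity type:', rawItem)
}
```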
16 changes: 10 additions & 6 deletions documentation/Entities.md
@@ -86,7 +86,7 @@ This occurs during many operations, including `put` and `get`.
> If your table only has a Partition Key, and doesn't have a Sort Key, see the section _PK-only Entities_ below.
Each of the `pk()` and `sk()` is passed an argument of type `TPKSource` or `TSKSource`, as defined earlier, and should return a string.
Each of `pk()` and `sk()` is passed an argument of type `TPKSource` or `TSKSource`, as defined earlier, and should return a string.
Let's go back to our example of `Sheep` from earlier. Let's say we have a particular sheep object that is internally represented as follows:
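(The original example object sits outside the visible part of this diff; the literal below is an illustrative stand-in, consistent with the values used in the README examples.)

```typescript
const shaunTheSheep: Sheep = { breed: 'merino', name: 'shaun', ageInYears: 3 }

// With the generator functions from the README's SHEEP_ENTITY definition this gives:
//   pk({ breed: 'merino' })  -> 'SHEEP#BREED#merino'
//   sk({ name: 'shaun' })    -> 'NAME#shaun'
```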
@@ -119,9 +119,13 @@ Notice that the parameter types here are precisely the same as those we gave for

A couple of less common scenarios.

First - it's usually the case that the value of your PK / SK attributes will always contain all the actual field values defined in `TPKSource` / `TSKSource`, but that's not always true. In such situations just specify the full set of fields that **do** drive your PK / SK values, and then you can return whatever you like from the generator functions.
First - it's usually the case that your `pk()` and `sk()` functions return a context-free value based on their parameter types - `TPKSource` / `TSKSource`.
In such cases your entity will likely be a stateless object.

Second - if either of your Partition Key or Sort Key attributes are also being used to store specific values of your table (in other words your table **does not** have separate '`PK`' and '`SK`' style attributes configured) then you can just return field values unmanipulated from your generator functions, **but you still need to implement the functions** .
Sometimes though the value you want to generate for `pk()` and/or `sk()` will be partly or solely dependent on _some other value(s)_ - e.g. a configuration value.
In these situations you may need to create a stateful entity object (which is what you pass to the entity store in `for(entity)`).
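As a purely hypothetical illustration of that idea (the names and the prefixing scheme here are made up, not part of the library):

```typescript
// A key generator whose output depends on configuration as well as on the item itself.
// An entity built around this closure carries state (the environment prefix), so it can't
// be a single shared global constant in the way a context-free entity can.
function sheepPkGeneratorFor(environmentPrefix: string) {
  return ({ breed }: Pick<Sheep, 'breed'>) => `${environmentPrefix}#SHEEP#BREED#${breed}`
}
```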

Second - if either of your Partition Key or Sort Key attributes are also being used to store specific fields of your entity (in other words your table **does not** have separate `PK` and `SK` style attributes configured) then you can just return field values unmanipulated from your generator functions, **but you still need to implement the functions** .

E.g. say you have an internal type as follows:

Expand Down Expand Up @@ -162,7 +166,7 @@ This might result (depending on table configuration) in the following object bei

The _metadata_ fields - `PK`, `SK`, `_et`, `_lastUpdated` - are controlled through other mechanisms, but if you want to change what _data_ fields are stored, and what values are stored for those fields, then you must implement `convertToDynamoFormat()`.

The type signature of `convertToDynamoFormat()` is: `(item: TItem) => DynamoDBValues`, in other words it receives an object of your "internal" type, and must return a valid DynamoDB Document Client object _(`DynamoDBValues` is simply an alias for `Record<string, NativeAttributeValue>`, where [`NativeAttributeValue` comes from the AWS library.](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-util-dynamodb/TypeAlias/NativeAttributeValue/))_
The type signature of `convertToDynamoFormat()` is: `(item: TItem) => DynamoDBValues`, in other words it receives an object of your "internal" type, and must return a valid DynamoDB Document Client object _(`DynamoDBValues` is an alias for `Record<string, NativeAttributeValue>`, where [`NativeAttributeValue` comes from the AWS library.](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-util-dynamodb/TypeAlias/NativeAttributeValue/))_
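For example, here is a sketch of a custom implementation that stores a derived attribute instead of one of the internal fields; the `approximateBirthYear` attribute is invented purely for illustration:

```typescript
const convertToDynamoFormat = (item: Sheep): DynamoDBValues => {
  const { ageInYears, ...rest } = item
  // Persist an (invented) derived attribute rather than the internal ageInYears field
  return { ...rest, approximateBirthYear: new Date().getFullYear() - ageInYears }
}
```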

You may need to implement `convertToDynamoFormat()` in situations like the following:

@@ -177,7 +181,7 @@ If you implement `convertToDynamoFormat()` you'll likely also need to consider a

Each _Entity_'s `parse()` function is used during read operations to convert the DynamoDB-persisted version of an item to the "internal" version of an item. As with `.convertToDynamoFormat()`, since DynamoDB Entity Store uses the [AWS Document Client](https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/Package/-aws-sdk-lib-dynamodb/) library under the covers such parsing is less about low-level type manipulation and more about field selection and calculation.

As described above for `.convertToDynamoFormat()` - with DynamoDB Entity Store's default behavior the persisted version of object contains precisely the same fields as the internal version, and so in that case parsing consists of (a) removing all of the metadata fields and (b) validating the type, and returning a type-safe value.
As described above for `.convertToDynamoFormat()` - with DynamoDB Entity Store's default behavior the persisted version of an object contains precisely the same fields as the internal version, and so in that case parsing consists of (a) removing all of the metadata fields and (b) validating the type, returning a type-safe value.

#### Type Predicate Parsing
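(The full example is outside the visible part of this diff; as a rough sketch, a type predicate for `Sheep` looks something like the following, and it is this function that drives the well-typed results seen in the README.)

```typescript
function isSheep(x: unknown): x is Sheep {
  const candidate = x as Sheep
  return (
    typeof candidate?.breed === 'string' &&
    typeof candidate?.name === 'string' &&
    typeof candidate?.ageInYears === 'number'
  )
}
```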

@@ -200,7 +204,7 @@ Our `parse()` implementation for `SHEEP_ENTITY` is then defined by calling `type

#### Advanced Parsing

If simply performing a type check isn't sufficient for an _Entity_, then you need to implement a custom `EntityParser<TItem>` function. `EntityParser` is defined as follows:
If just performing a type check isn't sufficient for an _Entity_, then you need to implement a custom `EntityParser<TItem>` function. `EntityParser` is defined as follows:

```typescript
type EntityParser<TItem> = (
4 changes: 2 additions & 2 deletions documentation/GettingStartedWithOperations.md
@@ -20,7 +20,7 @@ interface AllEntitiesStore {

There are three methods on this interface to match the three top-level groupings of operations you can perform:

* Operations on one type on entity
* Operations on one type of entity
* Operations on multiple types of entity
* Transactional operations (single or multiple entities)

@@ -142,7 +142,7 @@ If you need the [AWS API version of what `put` returns](https://docs.aws.amazon.
## Get

The two methods for get-ting items both call [_GetItem_](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_GetItem.html) under the covers and both have the same parameters.
The difference between is what happens if the item doesn't exist.
The difference between them is what happens if the item doesn't exist.
I'll get onto that below when I cover the return value.
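As a quick sketch of the difference before getting to the details - the key values follow the Sheep examples, and the non-throwing method's exact name and return shape are covered by the signatures below:

```typescript
// getOrThrow rejects if there's no item for the given keys:
try {
  const shaun = await sheepOperations.getOrThrow({ breed: 'merino', name: 'shaun' })
  console.log(`${shaun.name} is ${shaun.ageInYears} years old`)
} catch (e) {
  console.log('no sheep called shaun')
}
// The other method instead reports "item not found" through its return value - see below.
```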

First, the signatures:
2 changes: 1 addition & 1 deletion documentation/Setup.md
@@ -254,7 +254,7 @@ When you make calls to the operations functions in Entity Store the library will

The `defaultTableName` property is useful if you have a situation where _most_ entities are in one table, but you have a few "special cases" of other entities being in different tables.

You can create a `MultiTableConfig` object:
To create a `MultiTableConfig` object:

* Use the multi-table specific `createStandardMultiTableConfig()` support function if all of your tables use the same "standard" configuration described earlier
* Build your own configuration, optionally using the other support functions in [_setupSupport.ts_](../src/lib/support/setupSupport.ts).
10 changes: 5 additions & 5 deletions documentation/SingleEntityTableQueriesAndTableScans.md
@@ -95,15 +95,15 @@ That's where the query-one-page versions of the query methods come in.

Before proceeding if you don't know how DynamoDB pagination works you'll probably want to read [the AWS docs](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.Pagination.html).
For example, result pages aren't numbered or saved in DynamoDB, and you can't page backwards.
Instead to get all pages, apart from the first page, you provide the key of the last element returned on the previous page and DynamoDB calculates just-in-time what the next page evaluates to.
Instead to get all pages, apart from the first page, you provide the key of the last element "evaluated" on the previous page and DynamoDB calculates just-in-time what the next page should be.
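Putting that together, here is a sketch of a typical paging loop using `queryOnePageByPk` as described below. The `lastEvaluatedKey` response field name, and its use as the next `exclusiveStartKey`, are assumptions in this sketch - check `OnePageResponse` for the exact shape:

```typescript
let startKey: Record<string, unknown> | undefined = undefined
do {
  const page = await sheepOperations.queryOnePageByPk(
    { breed: 'merino' },
    { limit: 50, exclusiveStartKey: startKey }
  )
  for (const sheep of page.items) {
    console.log(`${sheep.name} is ${sheep.ageInYears} years old`)
  }
  // Assumption: the "key of the last element evaluated" is surfaced as lastEvaluatedKey
  startKey = page.lastEvaluatedKey
} while (startKey)
```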

`queryOnePageByPk` takes an optional argument of type [`QueryOnePageOptions`](https://symphoniacloud.github.io/dynamodb-entity-store/interfaces/QueryOnePageOptions.html).
This has the following optional properties:

* `limit` - the maximum number of items to _evaluate_. What does _evaluate_ mean? Well, often it means the maximum number of items to return, but it's complicated. :) See [the AWS docs](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html#API_Query_RequestParameters).
* `exclusiveStartKey` - the key of the last item evaluated on the previous page; reading starts immediately after it. More on that in a moment. If you are requesting the first page, leave it undefined.
* `scanIndexForward` - same as for `QueryAllOptions`
* `consistentRead` - same as for `QueryAllOptions`
* `scanIndexForward` - same as for `QueryAllOptions`, described earlier on this page
* `consistentRead` - same as for `QueryAllOptions`, described earlier on this page

Let's look at an example. Say we have 2 merino sheep in our table.
We can run the following to just get one sheep at a time back from DynamoDB, by specifying the `limit` option with `queryOnePageByPk`:
@@ -114,7 +114,7 @@ const result = await sheepOperations.queryOnePageByPk({ breed: 'merino' }, { lim

> You don't need to specify a `limit` - if you don't DynamoDB will return up to 1MB of results. I'm just specifying it here for clarity of example.
This time the result isn't simply a list of items, it's an object of type [`OnePageResponse<TItem>`](https://symphoniacloud.github.io/dynamodb-entity-store/interfaces/OnePageResponse.html).
This time the result isn't a list of items, it's an object of type [`OnePageResponse<TItem>`](https://symphoniacloud.github.io/dynamodb-entity-store/interfaces/OnePageResponse.html).
This has two fields:

* `items: TItem[]` - the parsed list of results for this page, using the same parsing rules as query-all methods
@@ -328,7 +328,7 @@ You should **only** use this command if you're happy to read your entire table!

`scanOnePage()` returns a page of results, rather than the entire table.

Paging works in exactly the same as for querying, so read the _Query-one-page by Partition Key_ section above if you haven't done so already.
Paging works in exactly the same way as for querying, so read the _Query-one-page by Partition Key_ section above if you haven't done so already.

`scanOnePage()`'s options are of the following type:

