add new serverless-framework-sqs-dynamodb kit #709
base: main
Conversation
- initialize package.json
- add .gitignore with v1 values
- add .nvmrc and set to node16
- add init README with some scripts
- add start, build, and infrastructure scripts
- install deps for establishing and running the project initially
- add job generator
- add job processor
- add dynamodb stream config and handlers
- add sqs utils
Awesome to have this as a starter! Left some food for thought, but no show-stoppers.
```yaml
# configures DynamoDB resource for persistent storage
dynamodb:
  image: amazon/dynamodb-local
  hostname: dynamodb
```
Small remark: this attribute seems redundant, as the service name is used as the hostname by default. It stood out to me because the other services don't have it specified.
It is, but isn't. Someone could change the Docker service name, which they totally can, and this attribute lets me reference it later in the docs for the db-admin feature, so it's necessary.
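To illustrate the point, here's a hedged sketch of that cross-reference (the `dynamodb-admin` service shape and its `DYNAMO_ENDPOINT` variable are assumptions for illustration, not lines quoted from this PR's compose file):

```yaml
dynamodb:
  image: amazon/dynamodb-local
  hostname: dynamodb # pinned, so renaming the service key doesn't break references

dynamodb-admin:
  image: aaronshaf/dynamodb-admin # assumed admin UI image
  environment:
    # points at the hostname pinned above
    DYNAMO_ENDPOINT: http://dynamodb:8000
```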
"name": "serverless-framework-sqs-dynamodb", | ||
"version": "1.0.0", | ||
"description": "A starter kit utilizing the Serverless Framework in conjunction with AWS DynamoDB & SQS.", | ||
"author": "Dustin Goodman", |
Maybe you don't want to put yourself up as "THE" author.
Well I am THE author of this kit 😉 - I do want to start introducing authors to kits, though, for better attribution and crediting, to make this project more open-source friendly.
@TapaiBalazs added some good context. I was under the impression that we didn't have it on the other kits because they're kits from all of us. But maybe I'm just a bit too much of a communist 😉
I just read more on the official docs on this: https://docs.npmjs.com/cli/v9/configuring-npm/package-json#people-fields-author-contributors
Looks like we need author & contributors fields. IMO the person who defined the kit elements is the author, and everyone who helped make it what it is belongs in contributors.
I understand your point, but the generator script currently does the following:

- User runs the generator
- User selects which kit they want, and then provides a name
- The script downloads the folder of the starter kit from GitHub
- The package.json "name" and "version" properties get overwritten

Therefore, if Jane Doe generates this starter kit, their package.json will contain you as the author of their project. This is the reason why the other kits do not have the author field in their package.json files.
Of course, the generator script can be updated to remove the author from the package.json; we could even add a third prompt so the user could set their own author field during the script run.
Update: we could update the script to get the user name and e-mail from their git configuration and set them as the author field.
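A minimal sketch of that update idea, assuming a Node-based generator script (`execSync` and the fs calls are standard Node APIs; the surrounding generator wiring is hypothetical):

```ts
import { execSync } from 'node:child_process';
import { readFileSync, writeFileSync } from 'node:fs';

// Reads a value from the user's git configuration; throws if the key is unset.
const gitConfig = (key: string): string =>
  execSync(`git config --get ${key}`, { encoding: 'utf8' }).trim();

// Hypothetical generator step: overwrite the kit author with the
// generating user's identity, using npm's "Name <email>" shorthand.
const setAuthorFromGit = (packageJsonPath: string): void => {
  const pkg = JSON.parse(readFileSync(packageJsonPath, 'utf8'));
  pkg.author = `${gitConfig('user.name')} <${gitConfig('user.email')}>`;
  writeFileSync(packageJsonPath, JSON.stringify(pkg, null, 2) + '\n');
};

setAuthorFromGit('./package.json');
```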
Hmmm... this raises a good discussion point. Something I've been pondering is whether we should put a "starter-dev.json" or equivalent file in each kit and use it instead of the package.json for the CLI. In general, here are the problems I'm thinking about and wanting to solve:

- Are contributors properly given credit for their submissions and work regardless of the initiation point?
- Can we support more non-Node-based starter kits? Right now the Deno kit has a bit of a hack to be included, but what if we started to include non-JavaScript tooling as well? We may need to rethink our CLI strategy in this regard.

I'll be honest that I'm less concerned about the generated kit containing the original author's details. I think that's actually appropriate in this case. Renaming was to allow others to have a project name custom to what they're doing out of the box. The metadata file I'm describing may be the solve here, though, so maybe this is a moot point.
```ts
    },
  },
  'esbuild': {
    packager: 'yarn',
```
I suppose there's no simple way to set this to the package manager used to install the deps? My concern is that not all devs have `yarn` locally, but all Node installations come with `npm`.
I'll note this in the README in the serverless section. That's a really good shoutout. I was developing using yarn so I needed it set to this and my instructions recommend it.
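For readers following along, the knob in question is `serverless-esbuild`'s `packager` option, which accepts `npm`, `yarn`, or `pnpm`. A sketch of the relevant fragment (names taken from the diff above; treat this as illustrative, not the kit's exact file):

```ts
// serverless.ts (sketch of the custom section)
const custom = {
  esbuild: {
    // Must match the package manager used to install dependencies,
    // since the packager resolves modules against your lockfile.
    packager: 'yarn', // or 'npm' / 'pnpm'
  },
};

export default custom;
```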
```ts
export const handler: DynamoDBStreamHandler = async (event) => {
  console.log('Example Stream Processor Handler initiated');

  const recordHandler = async (record: DynamoDBRecord) => {
```
Seeing this method does not rely on the `handler`'s state, we could move it out of the body. It's a micro-optimization, I'm aware, but it might set a good example.
That's a great recommendation. I concur.
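For concreteness, a sketch of the suggested hoist (the record-processing body is hypothetical; the handler shape comes from the snippet above):

```ts
import type { DynamoDBRecord, DynamoDBStreamHandler } from 'aws-lambda';

// Hoisted to module scope: it closes over nothing from the handler,
// so it's created once per container rather than once per invocation.
const recordHandler = async (record: DynamoDBRecord): Promise<void> => {
  console.log('Processing record', record.eventID); // hypothetical body
};

export const handler: DynamoDBStreamHandler = async (event) => {
  console.log('Example Stream Processor Handler initiated');
  await Promise.all(event.Records.map(recordHandler));
};
```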
```diff
@@ -0,0 +1 @@
+export const DEFAULT_CACHE_TIME = 300_000; // 5 minutes = 1000 ms/s * 60 s/min * 5 min
```
Underscore thousand separators, nice!
Looks alright, but I have some comments and questions.
```diff
@@ -38,7 +38,10 @@ export { ShareIcon } from './ShareIcon';
 export { LinkedinIcon } from './LinkedinIcon';
 export { QwikIcon } from './QwikIcon';
 export { SolidJsIcon } from './SolidJsIcon';
-export { DenoIcon } from './DenoIcon.tsx';
+export { DenoIcon } from './DenoIcon';
```
Nice catch
### General Commands

- `build` bundles the project using the serverless packaging serverless. The produced artifacts will ship bundles shipped to AWS on deployment. You can optionally pass `--analyze <function name>` to run the bundle analyzer and visualize the results to understand your handler bundles.
> using the serverless packaging serverless.

This wording feels a bit strange to me.
> will ship bundles shipped to AWS on deployment

This too feels a bit strange.
LOL - I was tired when I wrote these. Will fix. Good catch.
- `deploy` ships the project to the configured AWS account using the Serverless Framework CLI command.
- `start` runs the `serverless-offline` provided server for local development and testing. Be sure to have the local docker infrastructure running to emulate the related services.
- `test` runs `jest` under the hood.
- `lint` runs `eslint` under the hood. You can use all the eslint available command line arguments. To lint the entire project, run `yarn lint .`, or equivalent. You can affix `--fix` to auto-correct linting issues that eslint can handle.
> You can use all the eslint available command line arguments

I think "You can use all the available eslint command line arguments" would be the right word order.
This README uses `yarn` for commands. If you're using `npm` or `pnpm`, utilize the equivalent version of the commands.

1. Create a `.env` file:
Do you need a set-up, configured, and logged-in AWS account before these steps? 🤔
Technically speaking - no. This kit works FULLY offline and does not require AWS at all. That being said, for deployment and other things, it might be nice later. Will add notes.
### Infrastructure Commands

- `infrastructure:up` creates docker container and related images and runs them in the background. This should only be needed once during initial setup.
Based on the docker-compose.yml, the infrastructure creates 4 containers (dynamodb, dynamodb-admin, sqs, and redis), so I think "docker containers" should be used in these instructions.
LOL - good catch.
(screenshot: Docker Desktop showing the running containers)
I don't mean to be annoying about this, but that's 4 containers running 4 images. As far as I understand Docker, 1 container runs 1 image. In your screenshot you have 4 running containers.
What you call a container with the dropdown is just UI grouping; if you run `docker ps` you will see all four of those as separate containers with their separate container IDs.
I interpret the screenshot as 1 container, 4 images, but I won't claim to be a Docker expert. I will ping a DevOps buddy to confirm.
Dustin's friend here. @TapaiBalazs is correct. If you look at the docker-compose file, you'll see that each one of those containers is defined with its own image and configuration.
TIL - thanks @mpstein ❤️
fix pushed
```ts
import { getCacheKey } from './getCacheKey';

export const destroy = async (key: string) => {
  const item = await deleteItem(process.env.TECHNOLOGIES_TABLE, { id: key });
```
Does the `deleteItem` method return the deleted entry? 🤔
That's what the unit test validates, but it feels strange to me after PostgreSQL, where delete did not return anything, just information about how many rows were affected, and the REST endpoint returns OK.
My implementation has chosen to return the record. It doesn't necessarily have to do this, but DynamoDB does return it, therefore, I do.
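For context, a sketch of how a `deleteItem` helper can surface the deleted record with the AWS SDK v3 document client; `ReturnValues: 'ALL_OLD'` is the real DynamoDB option that returns the pre-deletion item, while the helper's exact shape here is an assumption rather than the kit's actual code:

```ts
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DeleteCommand, DynamoDBDocumentClient } from '@aws-sdk/lib-dynamodb';

const client = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Deletes the item and returns it as it existed before deletion.
export const deleteItem = async (
  TableName: string,
  Key: Record<string, unknown>
) => {
  const { Attributes } = await client.send(
    new DeleteCommand({ TableName, Key, ReturnValues: 'ALL_OLD' })
  );
  return Attributes; // undefined if no item matched the key
};
```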
Understood, I just wanted confirmation on the dynamodb implementation.
```ts
    return newTechnology;
  }

  return null;
```
Do we suppress an error here? If yes, shouldn't we log it? Should this `return null` ever be reached? 🤔
No error suppression, so this is good as is.
I was now able to put together a mental model of the structure and understand that this is not the handler, but a level down.
```diff
@@ -0,0 +1,7 @@
+/**
+ * Utility function for checking where functions are being run locally via serverless offline
```
Suggested change:

```diff
- * Utility function for checking where functions are being run locally via serverless offline
+ * Utility function for checking if functions are being run locally via serverless offline
```
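As a hedged illustration of what such a utility typically looks like (serverless-offline really does set the `IS_OFFLINE` environment variable; the function name here is assumed):

```ts
/**
 * Utility function for checking if functions are being run locally via
 * serverless offline, which sets IS_OFFLINE in the emulated Lambda environment.
 */
export const isOffline = (): boolean => process.env.IS_OFFLINE === 'true';
```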
```ts
afterAll(async () => {
  ddbMock.restore();
  await removeFromCache(getCacheKey('87af19b1-aa0d-4178-a30c-2fa8cd1f2cff'));
```
Where do you set/hardcode this cache key when setting the item into the cache? 🤔
I also think this UUID string should be exported as a `MOCK_TECHNOLOGY_ID` constant and used across tests.
These were just bad copy/pastes from other tests I wrote. They aren't actually needed.
```ts
    return responseHelper(StatusCodes.NOT_FOUND, null);
  }

  return responseHelper(StatusCodes.OK, technology);
```
Should this method invalidate the cache entry for the item's id? 🤔
If the GET technology/:id endpoint is cached for 5 minutes and then an update occurs, the previous value gets returned by the GET endpoint. I guess the same goes for the DELETE and CREATE methods as well; namely, the caching of the all-technologies endpoint should also be invalidated if that is cached. 🤔
I handle it a layer deeper, in `models/technology/update`. It is doing the invalidation, but the invalidation is handled by the model layer that determines whether a value comes from the cache or the database, instead of by the handler layer. Separation of concerns was my reasoning.
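A minimal sketch of that separation, reusing the `getCacheKey` and `removeFromCache` helpers visible elsewhere in this diff (the import paths and the `updateItem` helper are assumptions):

```ts
import { getCacheKey } from './getCacheKey';
import { removeFromCache } from '../../utils/cache'; // path assumed
import { updateItem } from '../../utils/dynamodb'; // helper assumed

export const update = async (id: string, changes: Record<string, unknown>) => {
  const technology = await updateItem(process.env.TECHNOLOGIES_TABLE, { id }, changes);

  // The model layer owns cache coherence: any write invalidates the cached
  // read for this id, so handler code never has to think about caching.
  await removeFromCache(getCacheKey(id));

  return technology;
};
```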
Type of change
Summary of change
Adds a new starter kit to the project that utilizes a fully serverless AWS stack. This kit includes:
Checklist
- [ ] This fix resolves #
- [ ] I have verified the fix works and introduces no further errors