A web application for finding, browsing, and reviewing music. A user can review any song published to Spotify, which we believe makes for both a great user experience and an expansive dataset to work with.
The site is available at: http://it2810-29.idi.ntnu.no/project2
You need to have Node.js and PostgreSQL installed.
# Clone the repository
git clone https://git.ntnu.no/IT2810-H25/T29-Project-2.git
cd T29-Project-2
# Create a PostgreSQL database
createdb -U postgres rate_your_songs
# assuming username "postgres"

The course staff told us to commit our .env file as-is, so it is included in the repository.
If you need to set it up yourself, create a file named .env in the root directory and copy the contents of the .env.example file into it. Fill in the environment variables with their appropriate values.
# Install dependencies
npm install
# Migrate database
npm run migrate:fixtures
# NOTE: You can only run fixtures once without resetting the database.
# If you have already migrated with fixtures before, migrate without fixtures:
# npm run migrate
# Start the development server
npm run dev
# Visit http://localhost:5173/project2/
# If you want to run frontend and backend separately, you can do the following:
# In terminal A, run the backend:
# npm run dev:api
# In terminal B, run the frontend:
# npm run dev:web

# Run backend and frontend tests
npm run test

To run end-to-end tests, you need a separate test database.
# Create a PostgreSQL database for tests
createdb -U postgres rate_your_songs_test
# assuming username "postgres"

Then, you can run the tests:
# Install dependencies for end-to-end tests
npm run test:e2e:install
# Run end-to-end tests
npm run test:e2e

# This is automatically done when you migrate the database.
npm run generate

The following section is not required for running the project, as the generated types are committed to the repository, but it is needed during development when you make changes to the GraphQL schema or queries/mutations.
# In terminal A:
npm run dev:api
# In terminal B:
npm run codegen # (You can safely stop the API server in terminal A after this is done)

First, you need the project set up on the VM:
## SSH into the VM
ssh <username>@it2810-29.idi.ntnu.no
# Clone the repository if you haven't already
git clone https://git.ntnu.no/IT2810-H25/T29-Project-2.git
cd T29-Project-2

Then, you can build and run the backend:
# Install dependencies
npm install
# If the project is already running on the VM, stop and delete the existing process first
npm run vm:stop
npm run vm:delete
# Build and start the project
npm run build:api
npm run vm:start
# The API should now be running at http://it2810-29.idi.ntnu.no:4000
# To make sure the project starts on reboot, you can run:
npm run vm:save

From your local machine:
# Install dependencies
npm install
# Build the frontend
npm run vite:build
# Copy the frontend to a temp folder on the VM
scp -r dist/web <username>@it2810-29.idi.ntnu.no:/tmp/
# SSH into the VM
ssh <username>@it2810-29.idi.ntnu.no
## Delete existing frontend files and move new ones into place
sudo rm -r /var/www/html/project2
sudo mv /tmp/web /var/www/html/project2
# The frontend should now be running at http://it2810-29.idi.ntnu.no/project2

Our project is separated into two main parts: src/api and src/web. This section explains the purpose of each and where to find some important code.
This is where our backend is located. It contains all GraphQL-related code (except client queries/mutations), database code and business logic.
Some important folders and files:
- You can find our database-related code in `src/api/db/` as well as our repositories in `src/api/modules/<module>/<module>-repository.ts`.
- The `fixtures/` folder contains logic for seeding the database with initial data. It creates songs, users and reviews for testing purposes. These are crafted to mimic real-world data as closely as possible.
- You can find our GraphQL-related setup in `src/api/graphql/`, `src/api/server.ts`, and `src/api/modules/core.ts`.
- You can find all our business logic in `src/api/modules/<module>/`. Each module follows a domain-driven file structure where:
  - `<module>-types.ts` contains domain types for the module (separate from our infrastructure types generated by Prisma and the interface/client types in `src/types`)
  - `<module>-resolvers.ts` contains GraphQL resolvers for the module, imported into `src/api/graphql/resolvers.ts`
  - `<module>-loader.ts` contains data loaders for the module
  - `<module>-service.ts` contains business logic for the module
  - `<module>-repository.ts` contains database queries for the module
- Our types are stored in `src/types/` and in the `<module>-types.ts` files described above. `/__generated_/` contains resolver types generated from our GraphQL schema by `codegen`.
- `test/` contains our backend test files.
- `auth/` contains our authentication logic. This is described in more detail in the "Authentication" section below.
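To illustrate the layering described above, here is a minimal sketch of a hypothetical "review" module. All names are illustrative, not the project's actual code, and the repository is replaced by an in-memory stand-in:

```typescript
// review-types.ts: domain types (illustrative)
interface Review {
  id: number;
  songId: string;
  rating: number; // 1-5
}

// review-repository.ts: database queries (in-memory stand-in here)
class ReviewRepository {
  private rows: Review[] = [];
  private nextId = 1;

  insert(songId: string, rating: number): Review {
    const row = { id: this.nextId++, songId, rating };
    this.rows.push(row);
    return row;
  }

  findBySong(songId: string): Review[] {
    return this.rows.filter((r) => r.songId === songId);
  }
}

// review-service.ts: business logic (validation lives here)
class ReviewService {
  constructor(private repo: ReviewRepository) {}

  createReview(songId: string, rating: number): Review {
    if (rating < 1 || rating > 5) throw new Error("rating must be 1-5");
    return this.repo.insert(songId, rating);
  }
}

// review-resolvers.ts: thin GraphQL resolvers delegating to the service
const repo = new ReviewRepository();
const service = new ReviewService(repo);

const resolvers = {
  Mutation: {
    createReview: (_parent: unknown, args: { songId: string; rating: number }) =>
      service.createReview(args.songId, args.rating),
  },
};

const created = resolvers.Mutation.createReview(undefined, { songId: "abc", rating: 4 });
console.log(created.rating); // → 4
```

The point of the split is that resolvers stay thin, validation sits in the service, and only the repository touches the database.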
This is where our frontend is located. It contains all the React components, pages and styling.
Some important folders and files:
- You can find our GraphQL client setup in `client/`, including queries and mutations in `client/queries.ts` and `client/mutations.ts`.
- You can find our global state management in `state/`, including Redux store setup in `state/store.ts` and slices in `state/slices/`.
- You can find our session management in `session/`, including the custom `useSession` hook in `session/use-session.ts`.
- You can find our components in `components/`, including components from Catalyst UI in `components/catalyst-ui/`. Local components are preferably stored in the same folder as the page using them.
- You can find our pages in `pages/` with their local components.
- Our utilities are stored in `utils/`.
- `test/` contains our frontend test files.
We implemented a custom authentication system using JWT session tokens stored in HTTP-only cookies. This design keeps user credentials secure and helps prevent XSS attacks.
When a user logs in, the server generates a signed JWT containing their user data. The token is sent to the client and stored in an HTTP-only cookie, making it inaccessible to browser-side JavaScript.
The backend (src/api/) handles all authentication logic by issuing and validating JWTs, and attaching session tokens to response cookies. The frontend (src/web/) automatically includes these cookies with subsequent requests to authenticate the user.
During each request, the backend context extracts the authenticated user from the token and attaches it to the GraphQL request context. Resolvers can then check if a user is authenticated. We also use this to associate created reviews with the correct user. See example in src/api/modules/review/review-resolvers.ts#createReview.
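The sign/verify round trip described above can be sketched as follows. This uses Node's built-in crypto to keep the example self-contained; the project itself uses the jose library, and all names here are illustrative:

```typescript
// Minimal HS256 JWT sketch using node:crypto (the real app uses jose).
import { createHmac, timingSafeEqual } from "node:crypto";

const SECRET = "dev-only-secret"; // in the real app this comes from .env

const b64url = (s: string): string => Buffer.from(s).toString("base64url");

// "Login": sign a token containing the user's data
function issueToken(payload: object): string {
  const header = b64url(JSON.stringify({ alg: "HS256", typ: "JWT" }));
  const body = b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", SECRET)
    .update(`${header}.${body}`)
    .digest("base64url");
  return `${header}.${body}.${sig}`;
}

// Per-request context: verify the cookie's token and extract the user
function userFromToken(token: string): { userId: number } | null {
  const [header, body, sig] = token.split(".");
  const expected = createHmac("sha256", SECRET)
    .update(`${header}.${body}`)
    .digest("base64url");
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // constant-time comparison; reject tampered tokens
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(body, "base64url").toString());
}

const token = issueToken({ userId: 42 });
// In the real app, the token is attached as an HTTP-only cookie, e.g.:
//   res.cookie("session", token, { httpOnly: true, sameSite: "lax" })
console.log(userFromToken(token)?.userId); // → 42
```

A resolver would then read the extracted user from the GraphQL context and reject the request if it is null.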
On the frontend, the custom useSession hook provides access to the current user’s session and authentication state. It simplifies session management and makes it easy to conditionally render UI based on whether the user is logged in.
This section covers the tools we as a group chose to use, and why we chose them. It may not cover every third-party dependency, but it should cover the major ones. It does not cover the tools mandated by the course staff, such as Vite and GraphQL.
- PostgreSQL: The go-to relational database. We chose this mostly because of our prior experience with it. Reliable and performant.
- Prisma: An ORM for Node.js and TypeScript. We chose this because it has a great developer experience, good TypeScript support and makes database migrations easy.
- Tailwind CSS: A utility-first CSS framework that we chose for its flexibility and ease of use. As a group, we have a lot of experience with Tailwind, so it allows us to create responsive designs quickly and efficiently.
- Catalyst UI: A component library built on top of Tailwind CSS. The components look great out of the box and are fairly easy to customize, though not headless. We chose this over shadcn/ui because we wanted to try something new.
- urql: A GraphQL client for React. We chose this because it is lightweight, has a simple API, and has good TypeScript support with `@graphql-codegen`.
- zod: A TypeScript-first schema validation library. We chose this because of its excellent TypeScript support, ease of use and performance. It allows us to validate data on the server effectively.
- Redux: We specifically chose Redux as it is the go-to state management library for React, much as PostgreSQL is for databases.
- Apollo Server: A popular GraphQL server implementation. We chose this because of its ease of use. As a group, we do not have much prior experience with GraphQL servers, so we did not have any strong preferences.
- Express, cookies: We needed Express as an Apollo Server middleware in order to handle cookies for authentication. We chose Express because it is the most popular Node.js web framework and has great middleware integrations with Apollo Server.
- bcrypt: A library for hashing passwords. We chose this because it is a well-known and trusted library for password hashing.
- jose: A library for handling JWTs. We chose this because it is a modern library with good TypeScript support and is actively maintained.
- @spotify/web-api-ts-sdk: A TypeScript SDK for the Spotify Web API. We chose this because it is the official SDK from Spotify, has TypeScript support and makes it easy to interact with the Spotify API.
- ESLint: A linter for JavaScript and TypeScript. We chose this because it is highly configurable, has a large ecosystem of plugins and is widely used in the industry.
- Prettier: A code formatter for JavaScript and TypeScript. We chose this because it enforces a consistent style across our codebase, making it easier to read and maintain.
- Vitest: A testing framework for Vite projects. We chose this because it is designed to work seamlessly with Vite, has a simple API and good TypeScript support.
- Playwright: An end-to-end testing framework. We chose this because it supports multiple browsers, has a simple API and good TypeScript support.
- Husky, lint-staged: Tools for running scripts on Git hooks. We chose these to enforce code quality by running linters and formatters on staged files before commits.
We have a very strict ESLint config, with the goal of maintaining a consistent code style across the entire codebase and making the code easier to read and maintain. To enforce this, we have installed Husky and lint-staged to automatically run linting and formatting on staged files before each commit. This ensures that we never forget to lint and format our code. You can find the Husky configuration in .husky/pre-commit and the lint-staged configuration in .lintstagedrc.
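As an illustration, a typical `.lintstagedrc` for this kind of setup might look like the following. This is a hypothetical sketch; the repository's actual configuration may use different globs and commands:

```json
{
  "*.{ts,tsx}": ["eslint --fix", "prettier --write"],
  "*.{css,md,json}": ["prettier --write"]
}
```

Each key is a glob matched against staged files, and the listed commands run only on those files, which keeps pre-commit hooks fast.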
In addition to this, we have set up a GitHub Actions workflow and a self-hosted workflow runner that run tests and linting on each pull request and on each commit to main. This helps us catch code regressions, and together with Husky it has all but eliminated the chance of accidentally committing invalid or unformatted code. You can find documentation for this in docs/workflow-runner.md and the workflow file in .github/workflows/ci.yml.
- To reduce the number of pages loaded, we ensured that users can reach any page on our site from any other page with just one or two clicks. We also focused on a simple design without superfluous content, making it easier for users to find what they are looking for on the first try. This approach reduces unnecessary data transmissions and makes the product more sustainable.
- We decided on a simple dark theme with a limited color palette. This reduces electricity usage by requiring screens to emit less light and also decreases the overall file size of the page.
- The Spotify API uses content delivery networks for data delivery. This means that the data fetched from the API is delivered from a point closer to the user. This reduces long distance data transmissions across the network, and is therefore a more sustainable choice for our API than other options.
- To reduce data transmission, we limit how much content is displayed on each page by using pagination. This ensures that only the data required for the current view is loaded initially, and additional content is fetched only when the user opts to see more.
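The pagination idea in the last point can be sketched as follows. This is an illustrative offset-based variant with hypothetical names; the actual implementation may differ:

```typescript
// A page of results plus the metadata the client needs to fetch more.
interface Page<T> {
  items: T[];
  totalCount: number;
  hasNextPage: boolean;
}

// Return only the slice of rows the current view needs.
function paginate<T>(rows: T[], offset: number, limit: number): Page<T> {
  return {
    items: rows.slice(offset, offset + limit),
    totalCount: rows.length,
    hasNextPage: offset + limit < rows.length,
  };
}

// e.g. a song list fetched 10 at a time
const songs = Array.from({ length: 25 }, (_, i) => `song-${i + 1}`);
const firstPage = paginate(songs, 0, 10);
console.log(firstPage.items.length, firstPage.hasNextPage); // → 10 true
```

The client requests the next offset only when the user opts to see more, so unviewed pages are never transmitted.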
We very rarely used AI tools to write code or text in this project. We have used AI tools to brainstorm approaches to problems and to get suggestions and debugging help. You can locate the sections where we used AI tools to write code by searching for comments including AI DECLARATION. We chose this approach to ensure that we fully understand our codebase and to discourage shortcuts that may lead to technical debt, while still leveraging AI tools responsibly in our problem-solving process where we deem it appropriate.
