Stotles work sample assignment

Additional Comments

  • Filter by Buyer:
    • I've used the antd Select component for this filter, specifically in single-select mode to match the requirement listed in the task.
    • However, this filter is better suited to a multi-select component, as users are likely to want to filter by multiple buyers at once.
    • Given more time in a real-world scenario, I would have used the multi-select mode and adjusted the API endpoint to accept multiple buyer IDs.
    • This feeds into a comment left on the API side about needing a SQL query builder to make the API filters scalable.
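As a sketch of the kind of query composition this would need (the `buyer_id` column name is an assumption, not the task's actual schema):

```typescript
// Hedged sketch: building an IN clause with positional placeholders so the
// endpoint can accept multiple buyer IDs safely, without interpolating user
// input into the SQL string. The `buyer_id` column name is an assumption.
function buildBuyerFilter(buyerIds: string[]): { sql: string; params: string[] } {
  if (buyerIds.length === 0) {
    return { sql: "", params: [] }; // no filter applied: return all records
  }
  const placeholders = buyerIds.map(() => "?").join(", ");
  return { sql: `buyer_id IN (${placeholders})`, params: buyerIds };
}
```

A query-builder library would do this composition for you, which is why it scales better than hand-written conditionals as more filters are added.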
  • Performance with search/filters:
    • Currently, the search and filters call the API immediately on every change to either field.
    • Whilst this is less of an issue for the select, the text search field is not scalable as it makes a request on every keystroke.
      • One solution is to bundle the search/filter fields into a form and only make the request on submit.
      • A bonus of this approach is the accessibility benefit of a native form for screen-reader users.
    • Another solution is to debounce the API call, delaying it until the user has stopped typing for a set interval.
      • You could wrap this around all of the fields, but depending on the delay, this could be frustrating for the user.
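A minimal debounce sketch; the 300 ms delay and the `searchRecords` handler name are placeholders, not names from the codebase:

```typescript
// Hedged sketch of debouncing the search input. Each call resets the timer,
// so `fn` only fires once the caller has paused for `delayMs`.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Hypothetical usage: `searchRecords` stands in for the real API call.
// const onSearchChange = debounce((text: string) => searchRecords(text), 300);
```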
  • Edge Cases:
    • There are a few edge cases I discovered whilst working on this task (e.g. multiple types of tender, a one-off GBP/day value).
    • Given more time, or in a production environment, I would add error handling around unexpected data types and values, especially in a system pulling data in from external sources.
    • Surfacing these errors in a way that is helpful to the end user would also be a priority.
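To illustrate the kind of defensive handling I have in mind, here is a hedged sketch of a value formatter; the field names (`amount`, `currency`, `unit`) are assumptions rather than the task's actual schema:

```typescript
// Hedged sketch: format a tender value while tolerating missing or
// unexpected data, including the one-off per-day rate edge case.
interface TenderValue {
  amount: number | null;
  currency: string | null; // e.g. "GBP"
  unit?: "total" | "per-day"; // assumption: how a per-day rate is flagged
}

function formatTenderValue(v: TenderValue): string {
  if (v.amount == null || v.currency == null) {
    return "Unknown"; // surface missing data instead of rendering "null"
  }
  let formatted: string;
  try {
    formatted = new Intl.NumberFormat("en-GB", {
      style: "currency",
      currency: v.currency,
    }).format(v.amount);
  } catch {
    // An unexpected currency code from an external source throws a
    // RangeError here; fall back to a plain rendering rather than crash.
    formatted = `${v.amount} ${v.currency}`;
  }
  return v.unit === "per-day" ? `${formatted}/day` : formatted;
}
```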
  • Testing:
    • In the push to complete the task within the 3-hour timeframe, I've forgone writing tests.
    • However, given time, I would have written unit tests:
      • For the components, I would use React Testing Library to confirm that the table renders each possible type of data correctly (e.g. currencies, stages, dates). I would also test the handling of unexpected types and combinations of data (e.g. a CONTRACT with a closed date rather than an awarded date).
      • For the API, I would mock out the database layer and test that the endpoints handle both varied user input and unexpected data coming from the database.
    • I would also have written some e2e tests exercising the API endpoints and the client-side components together.
      • For this I would use Cypress to test the main user journeys through the app and confirm that the data renders correctly.
      • I would either mock the database response or seed a test database with consistent data for the tests to run against.
      • These tests would give confidence that future changes to functionality don't regress existing flows.
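One way to make those API unit tests cheap is to pull the filtering logic into a pure function that can be exercised against mocked records. This sketch assumes a simplified record shape, not the task's actual schema:

```typescript
// Hedged sketch: a pure filter helper the API handler could delegate to,
// unit-testable against in-memory records with no real database.
interface ProcurementRecord {
  id: string;
  title: string;
  description: string;
  buyerId: string;
}

function filterRecords(
  records: ProcurementRecord[],
  opts: { textSearch?: string; buyerId?: string }
): ProcurementRecord[] {
  const query = opts.textSearch?.toLowerCase();
  return records.filter((r) => {
    if (opts.buyerId !== undefined && r.buyerId !== opts.buyerId) return false;
    if (
      query !== undefined &&
      !r.title.toLowerCase().includes(query) &&
      !r.description.toLowerCase().includes(query)
    ) {
      return false;
    }
    return true;
  });
}
```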
  • Scaling:
    • At the moment, page load triggers an API request for the entire list of buyers from the database. However, this data is unlikely to change as often as the procurement data.
      • As such, I would cache it on the client side with a cache expiry of 24 hours (or whatever is deemed appropriate for the use case).
      • In addition, I would add a cache layer to the API to reduce the number of requests hitting the database.
    • Currently, the text search uses a basic SQL query over the title and description fields, which will not scale as the number of records grows.
      • In a production environment, I would use a search engine such as Elasticsearch to index these fields and improve search performance.
    • I also resolved a React key warning on the generation of the table rows. As the dataset grows and additional functionality is added to the table, stable keys will help prevent unnecessary re-renders of tables containing large amounts of data.

Getting started

This sample codebase consists of separate client & server code.

It's set up in a simple way to make it as easy as possible to start making changes; the only requirement is having recent versions of Node & npm installed.

This is not a production ready configuration (nor production ready code), it's only set up for easy development, including live reload.

To run the client bundler:

cd client
npm install
npm run dev

The processed code will be available at http://localhost:3001

To start the server:

cd server
npm install
npm run dev

The server will be available at http://localhost:3000 - the page is automatically configured to use the assets served by Vite on port 3001.

You should see something similar to this page:

(Screenshot: search page)

Disabling/Enabling TypeScript

If you prefer to completely disable TypeScript for a file, add // @ts-nocheck on the first line. If on the other hand you'd like to enable strict type checking, modify tsconfig.json according to your needs.

Note that you can import plain JavaScript files that won't be fully typechecked.

Browsing the database

You should start by looking at the migration in the ./migrations folder. If you prefer to browse the DB using SQL, you can use the sqlite command line (just run sqlite3 ./db.sqlite3) or any other SQL client that supports sqlite.

If for any reason the database becomes unusable, you can rebuild it using the ./reset_db.sh script.

The task

All the instructions are available here.