The problem, from the README, is as follows.
For this challenge you will consume the Giant Bomb API to create an application that will allow a user to search games and “rent” them. The application should consist of at least two unique pages (`search` and `checkout`). Your view should display the game thumbnail and title, and the rest is up to you. You can use any language and or framework you’d like.
We’ve got some getting-started links to the Giant Bomb API, and an invitation to questions.
I’d like to have questions, but the problem statement is so open ended that it seems like every question can be answered with “the rest is up to you”.
That said, let’s ponder what we know. We need at least two pages:
- A `search` page
- A `checkout` page
The existence of a checkout page means we need to be able to store a collection of games the user would like to “rent”.
Once they’ve “rented” the game, it’s reasonable to then want a list of games the user has rented, with some kind of rental-based data attached.
Right off the bat I can think of two things I’d like to either avoid doing, or do last: they’re more work, they weren’t specifically asked for, and they might push things outside of the 3-4 hour time box this project is meant to take. Also, the goal of the project isn’t to have a fully functioning “product”, but to display my development process.
At a high level, I’d like to achieve the following goals:
- Use re-frame, as that’s what’s in use at Gravie
- Spend as little time on getting set up, and on boilerplate, as possible
- First achieve the explicitly asked-for goals, then add flourish.
The primary challenge here is that this problem is to build a frontend app, and the majority of my experience is with backend, so I need to learn many new things, and learn them quickly.
Given this, it will be challenging to know, ahead of time, the order in which to build things, so I will likely wander down various dead ends or false starts.
Since I’m trying to reduce the time I spend getting set up, and on boilerplate, I’m going to begin by using the “simple” example app in the re-frame repo, found here. This should give me all of the base files I need – the `deps.edn`, the `shadow-cljs.edn`, and so on – to have a running app, and get me to writing the actual app code.
One change I’ve made to this initial `deps.edn` file is to include the deps for the `re-frame-10x` library, as I’d like some more debugging insight.
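For reference, the addition is a single extra entry in the deps map; the version below is illustrative, so check the re-frame-10x releases for the current one.

```clojure
;; deps.edn (excerpt) – only the added dependency is shown; the version is
;; illustrative rather than pinned by anything in this post.
{:deps {day8.re-frame/re-frame-10x {:mvn/version "1.5.0"}}}
```

re-frame-10x also wants a dev-time preload namespace and a closure-define to enable tracing in the shadow-cljs build; its README spells out the exact settings.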
I don’t know any frontend framework well enough to get something spun up quickly, so since I’ll need to learn something new, I’m choosing re-frame because:
- I’ve been interested in re-frame for a while
- Gravie uses re-frame, so, presuming I get hired, learning it now will help jumpstart my first couple weeks.
Now that we’ve got a basic app running, and before I start writing frontend code, I want to take an initial go at designing a data model. This is, currently ([2023-08-25 Fri]), being built on a plane, so I’m going to wait to design the data model for Giant Bomb results until I have wifi again.
With that said, here’s what we know we need to store somehow:
- A search term
- The search results
- The items they’d like to rent
- The items they’re currently renting
I’m going to start with the following keys in the app-db:
- `search-term` - the current search term, a string
- `search-results` - a collection of results returned by the Giant Bomb API, likely a vector of maps, though that could change
- `cart` - a collection of games that the user wants to rent, where each item in the collection has the same structure as the items in `search-results`
- `rented-items` - a collection of games that the user is currently renting, sharing a similar structure to the `search-results` and `cart` collections, though likely with a couple extra fields to represent things like a due date
- `api-key` - the API key to use for making Giant Bomb API requests. There are various ways to handle authenticating calls with Giant Bomb – have users as part of the data model, where each user has an API key, or work with Giant Bomb to get an API key for our app so that every call is authenticated with that, and so on – but for simplicity’s sake, and because this repo is public, I’m going to have an input field that users will paste their API key into, and which goes away on page reload. That way I don’t have to worry about leaking my own API key.
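Concretely, the initial app-db I have in mind looks like the sketch below; the exact shapes will almost certainly evolve once real Giant Bomb data shows up.

```clojure
(ns app.db)

;; Initial app-db, matching the keys described above. search-results, cart,
;; and rented-items all hold maps shaped like Giant Bomb results;
;; rented-items will later pick up extra rental fields such as a due date.
(def default-db
  {:search-term    ""
   :search-results []
   :cart           []
   :rented-items   []
   :api-key        nil})
```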
Now that the initial data model is sorted out, we can create our search and API key inputs. The search won’t make requests, per se, since I’m on an airplane, but at least we’ll have something on the page.
The API key input was essentially the same as the basic search input field.
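Sketched out, the search input looks something like the following, with the API key input following the same pattern; the event and subscription names here are placeholders of mine, not necessarily what ends up in the repo.

```clojure
(ns app.views
  (:require [re-frame.core :as rf]))

;; Hypothetical event/sub names – the real ones may differ.
(rf/reg-event-db
 :set-search-term
 (fn [db [_ term]]
   (assoc db :search-term term)))

(rf/reg-sub
 :search-term
 (fn [db _]
   (:search-term db)))

(defn search-input []
  [:input {:type        "text"
           :placeholder "Search for a game…"
           :value       @(rf/subscribe [:search-term])
           :on-change   #(rf/dispatch [:set-search-term (.. % -target -value)])}])
```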
Now that we’ve got a basic search input field and an API key input field, let’s hit the Giant Bomb API.
First up, GB supports 200 requests/hour/resource, which could make initial development tricky, since I wouldn’t be surprised if I made more search requests than that.
From this, I’m going to delay (never build?) auto-complete support, since I don’t want to blow my API limit.
That said, let’s wire in the GB API, and to make our remote requests with effect handlers, let’s use `re-frame-fetch-fx`.
Aside from needing to add this as a dependency, we’ll also need the following things for the search to work:
- Getting the API key from the DB for the request
- Putting the query string into the request
- Putting the search results into the DB afterwards
- Some kind of error handling
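Sketched with re-frame-fetch-fx’s `:fetch` effect, those pieces could hang together as below. The option names come from my reading of the library’s README and should be double-checked, and the event names, the error handling, and the assumption of a keywordized `:body` are all mine.

```clojure
(ns app.events
  (:require [re-frame.core :as rf]
            ;; Requiring this namespace registers the :fetch effect –
            ;; double-check the exact namespace in the library's README.
            [superstructor.re-frame.fetch-fx]))

(rf/reg-event-fx
 :search/submit
 (fn [{:keys [db]} _]
   {:fetch {:method                 :get
            :url                    "https://www.giantbomb.com/api/search/"
            :params                 {:api_key   (:api-key db)     ; API key from the db
                                     :query     (:search-term db) ; the query string
                                     :resources "game"
                                     :format    "json"}
            :response-content-types {#"application/.*json" :json}
            :on-success             [:search/succeeded]
            :on-failure             [:search/failed]}}))

;; Put the search results into the db afterwards (assuming the library hands
;; us a parsed, keywordized body).
(rf/reg-event-db
 :search/succeeded
 (fn [db [_ {:keys [body]}]]
   (assoc db :search-results (:results body))))

;; Some kind of error handling – for now, just record the failure.
(rf/reg-event-db
 :search/failed
 (fn [db [_ failure]]
   (assoc db :search-error failure)))
```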
Both Fetch and XHRIO do the “fetch a remote resource” job, so how to pick? The answer is I don’t know, so let’s ask the internet, which led me to this Stack Overflow question, which identified Fetch as the newer way of doing this, one which leverages Promises, which are apparently now the preferred way to do asynchronous operations in JavaScript. My (somewhat limited) experience with the JS world tells me to follow the herd as much as possible, so I’ll go with Fetch for now.
So it turns out that both Fetch and XHRIO require CORS to be enabled on the server if JavaScript is to be allowed access to the results of cross-site requests, which is something I’ve just learned.
Furthermore, after digging around and inspecting the network tab, it looks like Giant Bomb does not support CORS.
As such, we’ll have to implement a proxying backend, so that the browser’s requests are same-origin, thus making the API call results available to our CLJS code and bypassing CORS.
I wonder what the quickest way to do this is…
I had originally wanted to make a frontend-only project, hoping to avoid spending time on doing anything with the backend. This original focus, when met with the CORS issue, led me to spend some time trying to build, or find, a transparent proxy that could take a request for localhost and forward it to Giant Bomb.
First I tried out using CLJS and Node’s HTTP(S) modules, but wasn’t particularly taken by the things one has to do to handle requests. There were some Node packages that maybe could have done this, but I neither wanted to trust the code blindly, nor audit it before use.
So I went to see if there was an existing OSS project that was doing this, and I spent a little time experimenting with tinyproxy and socat. Tinyproxy had some glitches, and also couldn’t reverse proxy to HTTPS URLs; socat was super promising, but I couldn’t quickly figure out how to set the `Host` header, which is necessary for the Fastly routing that Giant Bomb uses.
I then had a realization that building a “proxy” for these requests is just building a backend server, and that, blinded as I was to try to only write frontend code, I’d gone off on a wild goose chase.
So the simplest thing remains: write a dead-simple backend server in Clojure to handle all requests that need to hit Giant Bomb.
A barebones backend needs:
- A server to accept requests from the network
- A handler to handle those requests
- Routing, depending on what we’re doing
- An agreed upon data schema between the FE and BE
I’m going to skip things like a db and authz for the time being, as this isn’t a real product and we don’t have “users”, or other persisted, sensitive data.
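Stripped to the bone, that Clojure backend could look something like the sketch below – using http-kit for the server and clj-http for the outgoing request, which is an assumption of mine rather than what necessarily ends up in the repo.

```clojure
(ns proxy.core
  (:require [org.httpkit.server :as server]
            [clj-http.client :as http]))

(def gb-base "https://www.giantbomb.com/api")

(defn handler
  "Forward anything under /api to Giant Bomb and hand the body back verbatim.
   Because the browser only ever talks to this server, CORS never comes up,
   and clj-http sets the Host header for Fastly on its own."
  [{:keys [uri query-string]}]
  (let [upstream (str gb-base
                      (subs uri (count "/api"))
                      (when query-string (str "?" query-string)))
        resp     (http/get upstream {:headers          {"User-Agent" "game-rental-demo"}
                                     :throw-exceptions false})]
    {:status  (:status resp)
     :headers {"Content-Type" "application/json"}
     :body    (:body resp)}))

(defn -main [& _]
  (server/run-server handler {:port 8888}))
```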
I might also skip serving the built HTML from the server, as shadow-cljs does that for me atm. Ultimately, were I to serve this anywhere, I’d like to give Cloudflare Pages, and Cloudflare Pages Functions, a go, to minimize the infrastructure I need to build.
So, after writing that, the siren song of using Pages Functions was strong enough to convince me to give it a go, and I’m glad to say it works!
I’d still like to write out my reasoning behind this choice, as well as the trade-offs for it, which will go in either a later section or another file.
Now that we’ve got a backend that can query Giant Bomb, we need to process the results and display them.
Once we’ve got search functionality working, and can see/select games we’d like to rent, we’ll have enough data to support building a checkout page.
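Since the README only insists on showing a thumbnail and a title, the results view can stay small. Here’s a sketch; the Giant Bomb field names (`name`, `image`/`thumb_url`, `guid`) are recalled from memory and worth verifying against a real response, and the `:cart/add` event is a placeholder.

```clojure
(ns app.views.search
  (:require [re-frame.core :as rf]))

(rf/reg-sub
 :search-results
 (fn [db _]
   (:search-results db)))

(defn search-result-item
  "One search result: thumbnail and title, plus a button to add it to the cart."
  [game]
  [:li
   [:img {:src (get-in game [:image :thumb_url]) :alt (:name game)}]
   [:span (:name game)]
   [:button {:on-click #(rf/dispatch [:cart/add game])} "Rent"]])

(defn search-results-list []
  [:ul
   (for [game @(rf/subscribe [:search-results])]
     ^{:key (:guid game)}
     [search-result-item game])])
```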
This section covers some of the things I’ve learned while going over the re-frame tutorial, specifically things that strike me as important, and which I’m unlikely to see discussed over and over again. So the “Dominoes” won’t show up, per se, because they’re discussed or mentioned many times, and thus I’m unlikely to forget them.
View functions should get all of the data they need from a subscription, and not do any further processing on that data.
This allows for easier testing of code, as well as deduplication of work, since re-frame will reuse nodes in the signal graph, thus only doing the computation once, no matter how many views leverage it.
For example, instead of converting a `js/Date` object into a string in a view function, there should be a subscription that returns the datetime as a string already. See this doc for more details.
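A sketch of that idea: a derived subscription builds on a raw-date subscription, so the string conversion happens once in the signal graph instead of in every view that needs it. The db key and names here are mine.

```clojure
(ns app.subs
  (:require [re-frame.core :as rf]))

(rf/reg-sub
 :rental/due-date
 (fn [db _]
   (:due-date db)))                 ; a js/Date (or nil)

;; Derived sub: turns the raw date into a display string, so views never
;; touch js/Date themselves and the work is done once, then cached.
(rf/reg-sub
 :rental/due-date-str
 :<- [:rental/due-date]
 (fn [due-date _]
   (when due-date
     (.toLocaleDateString due-date))))

(defn due-date-view []
  [:span "Due: " @(rf/subscribe [:rental/due-date-str])])
```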
Causing side-effects in your `reg-event-db` or `reg-event-fx` event handlers takes away some of the super powers of the effect-driven nature of re-frame, making things harder to debug.
Instead of causing effects in event handlers – such as HTTP calls, event dispatching, or LocalStorage accessing – use `reg-event-fx` to return a description of effects that need to be dispatched later by the effect handlers (Domino 3, in re-frame parlance).
That way we isolate the side-effects to specific function definitions that run later, making them easier to test and debug, and we also get stronger debugging abilities, because a core part of our event system is now the generation and passing around of pure data.
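A small before/after sketch of the idea; the `:cart/add` event and `:local-store` effect are names of my own, and persisting the cart to localStorage is just an illustrative effect, not something the README asks for.

```clojure
(ns app.cart.events
  (:require [re-frame.core :as rf]))

;; Don't: perform the side-effect inside the event handler itself.
(rf/reg-event-db
 :cart/add-impure
 (fn [db [_ game]]
   ;; a side-effect hiding inside a "pure" handler – hard to test, invisible to tooling
   (.setItem js/localStorage "cart" (pr-str (conj (:cart db) game)))
   (update db :cart conj game)))

;; Do: return a *description* of the effects; Domino 3 performs them later.
(rf/reg-event-fx
 :cart/add
 (fn [{:keys [db]} [_ game]]
   (let [cart (conj (:cart db) game)]
     {:db          (assoc db :cart cart)
      ;; :local-store is a custom effect – its reg-fx registration appears below
      :local-store ["cart" (pr-str cart)]})))
```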
Effect handlers are registered with `reg-fx`.
From this doc.
A word of advice - make them as simple as possible, and then simplify them further. You don’t want them containing any fancy logic.
Why? Well, because they are all side-effecty they will be a pain to test rigorously. And the combination of fancy logic and limited testing always ends in tears. If not now, later.
I really dig this advice, and it seems worth meditating on.
Another piece of advice, which I also like:
A second word of advice - when you create an effect handler, you also have to design (and document!) the structure of the value expected.
When you do, realise that you are designing a nano DSL for value and try to make that design simple too. If you resist being terse and smart, and instead, favor slightly verbose and obvious, your future self will thank you. Create as little cognitive overhead as possible for the eventual readers of your effectful code.
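Following that advice, the `:local-store` effect referenced a little earlier stays about as simple as an effect handler can be: all the interesting logic lives in the (pure) event handler, and the effect handler just does the one side-effecty thing.

```clojure
;; Same [re-frame.core :as rf] alias as in the earlier snippets.
(rf/reg-fx
 :local-store
 (fn [[k v]]
   (.setItem js/localStorage k v)))
```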
I’ve been wondering about the “Don’t compute data in a view function” learning, and thinking about my time working with Clara Rules and what I learned about spooky action at a distance in signal-graph systems: how do you alert a dev when a change they make to a subscription or event/effect handler breaks an assumption held by some part of the system they were ignorant of? Schemas or specs seem like one piece of the puzzle there, so I like this call-out.
Since we want our event handlers to be pure, we can add an `inject-cofx` interceptor to our event handler – specifically with `reg-event-fx`, since `reg-event-db` doesn’t pass the coeffects to the specified function – whose first argument is an id for a function registered with `reg-cofx`, and that function will `assoc` into the coeffects whatever data is relevant.
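For instance, here’s a sketch of a `:now` coeffect, so an event handler can read the current time without reaching into the world itself; the event name and db keys are my own choices.

```clojure
(ns app.rental.events
  (:require [re-frame.core :as rf]))

;; A coeffect handler: assoc the current time into the coeffects map.
(rf/reg-cofx
 :now
 (fn [coeffects _]
   (assoc coeffects :now (js/Date.))))

;; Injecting it keeps the event handler pure: its only inputs are the
;; coeffects map and the event vector, both plain data by the time it runs.
(rf/reg-event-fx
 :rental/checkout
 [(rf/inject-cofx :now)]
 (fn [{:keys [db now]} [_ game]]
   {:db (update db :rented-items conj (assoc game :rented-at now))}))
```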
I sent the following questions to Gravie, and I’ve included the answers in each section, sans the names of specifically who responded since I haven’t asked to include their names, and don’t know how they’d feel about that.
Even though the problem seems open ended enough to obviate any questions, I realized that this could be a UX choice, and thus a question.
I asked:
Can the search and checkout pages be modals? Asked somewhat differently, what kind of functionality is the phrase “The application should consist of at least two unique pages” looking for? Is it looking for how someone would build out navigation and history functionality? Or perhaps something different?
Gravie replied:
Modal or otherwise, that’s up to you. “Unique page” means something different for a server-side rendered web app than it does for a single page app.
I asked:
Is it looking for how someone would build out navigation and history functionality? Or perhaps something different?
Gravie replied:
Yep! You’re understanding this correctly.
I’m envisioning this as a single page app, given we’re using React under the hood. As such, there’s a question of whether we should be able to link to each modal or page of the SPA, or just force users to load up the first page every time.
My personal preference is to support linking into different “pages”, though I don’t know exactly how right now.
I asked:
Should the search and checkout pages/modals/what-have-you be individually linkable?
Gravie replied:
No, we won’t look specifically for that.
Another good question to ask, as a method of finding a balance between getting something built quickly and accurately showcasing my abilities, is to reach out at different points of functionality to ask “is this enough?”. As for what constitutes “enough”, I think a good way to phrase the question could be “does the current functionality inspire confidence, or are there things that are missing which you would like to see?”.
I asked:
How much is enough?
When given a somewhat open ended prompt, I can tend to over-polish it, never quite sure if the prompt-as-written is enough to move on to the next stage, or if there’s secretly more being hoped for. I normally work with stakeholders on projects to resolve ambiguities, but in the case of interview-specific projects it’s never immediately clear who the stakeholders would be, or how much time they’d like to spend hashing out ambiguities.
So, to avoid endlessly working on this project and never actually present it, my current plan was to build specifically what was asked for in the README and then check in with you both to see if that was sufficient to engender confidence in moving to the next phase, or if there were specific things you were hoping to see that I hadn’t covered yet. I’d like to make sure the work I’m doing is giving good signal for the things you’re looking for, and this seemed like the simplest approach to me.
Does that sound like a good approach for you both? I’m also open to other approaches, so I welcome alternatives :)
Gravie replied:
Keep in mind that this is just a sample of your work, it is not expected to be production ready code!
Perhaps my favorite part of the project is the discussion with you about everything else that would have to be done to take it further. One good approach to that is to keep a running list in a README about future work as if it were to be taken all the way to production.
In short, show us your work with the intent to impress us AND to stimulate further discussion.
Sometimes – seemingly on first load of the page – navigation events don’t get suppressed after pushy handles them.
I’m not sure why, though I’ve tried to sort it out for a while.
Next steps would be, I believe, to better understand the debugging tools at my disposal that would allow me to trace the navigation event through its whole life cycle.
I believe that changing the history var to a defonce, in this commit, may have resolved this issue!
Since I never felt certain about the cause of this issue, I can’t know definitively that it’s fixed, but it hasn’t happened yet since making that change, which gives me confidence it’s resolved.
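For context, the routing setup in question looks roughly like the sketch below; the route matcher is hypothetical, and the comment captures my best guess at why `defonce` helps.

```clojure
(ns app.routes
  (:require [pushy.core :as pushy]
            [re-frame.core :as rf]))

;; Hypothetical route matcher – the real app's routing may differ.
(defn match-route [path]
  (case path
    "/"         {:page :search}
    "/checkout" {:page :checkout}
    nil))

;; With defonce, hot-reloading this namespace doesn't construct a second pushy
;; instance (and a second set of history listeners), which is my best guess at
;; why the original def-based version occasionally mishandled navigation events.
(defonce history
  (pushy/pushy #(rf/dispatch [:navigate %]) match-route))

(defn start-routing! []
  (pushy/start! history))
```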