This last week got very close to a repeat of week four. The reasons were similar (the scale of the problem becoming clearer and a real challenge emerging), but this time we decided to keep pushing through. The opportunity delivered by solving the engine problems is too great to ignore.
It should be said that it’s not all doom and gloom — the current version of the new engine can already deliver very similar functionality to Twendly in a fraction of the time with significant scalability, so that’s one significant engineering challenge down.
I guess the doubt creeps in because the further we get from the decision point "we need a new engine", the more we feel we are flying blind. At the time we made the decision there was a lot of clarity on why we needed it, but we also thought it would take a couple of weeks. Four weeks in, the decision point is fading and we're starting to feel unsure why we are doing it. On the other hand, taking time off to go back and re-clarify, while probably a good thing, just pushes the engine out further. In the end it comes down to trusting that the people Alex and I were four weeks ago when we made the decision haven't fundamentally changed. We could waste a lot of time second-guessing ourselves or revisiting, but I think we need to accept that we were right and get the job done. (Of course maybe the Tim and Alex of four weeks ago thought the Tim and Alex of now would be a lot smarter than we are, but then I thought they would have left better documentation too.)
One downside of only having two people in your start-up is you’ve only got yourself to blame and it does seem to skirt close to insanity :-)
With that out of the way, I do think some breakthroughs happened over the weekend — as Alex succinctly puts it, "We've been confusing the scalability and the algorithm problems".
Without trying to get too technical here, the problem is that the scalable solution we are building uses technology that is very fixed in its use cases — the way you design the data structures dictates how you'll use them. We've been trying to build too much of the end use case into the data itself, instead of dealing with it in a less aggregated format that we can get at really fast, then using live processing power to aggregate further. This might just let us be more flexible (and give us other scaling challenges down the road, but we'd be happy to have those if we get there!).
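To make the idea concrete, here's a minimal sketch (in Python, with hypothetical names — the post doesn't describe the actual store) of the shift described above: instead of baking the final aggregate into the data model, write at a finer granularity that's cheap to fetch, and roll it up with live processing at read time.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical store: lightly aggregated counts keyed by (term, hour bucket)
# rather than a single pre-baked total that locks in one use case.
raw_counts = defaultdict(int)

def record(term, when, count=1):
    """Write path: store at the finest granularity we can still fetch fast."""
    bucket = when.strftime("%Y-%m-%d-%H")
    raw_counts[(term, bucket)] += count

def aggregate(term, buckets):
    """Read path: use live processing power to roll buckets up on demand.

    Any window (a day, a week, business hours only) is just a different
    list of buckets — no schema change needed.
    """
    return sum(raw_counts[(term, b)] for b in buckets)

record("python", datetime(2010, 5, 1, 9))
record("python", datetime(2010, 5, 1, 9))
record("python", datetime(2010, 5, 1, 10))

day_buckets = [f"2010-05-01-{h:02d}" for h in range(24)]
print(aggregate("python", day_buckets))  # 3
```

The trade-off is exactly the one mentioned: reads now burn CPU on aggregation, which becomes its own scaling problem at volume, but the data model no longer dictates the queries.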
The other significant technical milestone has been a lot of work on my part on message queuing systems. While it's again taken longer than I would have liked, we've now got some really solid designs and test cases for a very robust and scalable processing pipeline. This lets us handle more complex solutions a lot better, but it now requires rewriting some core modules from complex sets of instructions into simpler tasks.
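A rough sketch of what that rewrite looks like in practice (Python, with invented task names — the real pipeline isn't described in the post): rather than one message carrying a complex set of instructions, each message on the queue is a small, single-purpose task that any worker can pick up and process independently.

```python
import queue

# Hypothetical task queue: each item is one simple, self-contained task.
tasks = queue.Queue()

def handle(task):
    """Each handler does one simple thing; composition happens via the queue."""
    if task["type"] == "fetch":
        return f"fetched {task['source']}"
    if task["type"] == "index":
        return f"indexed {task['doc']}"
    raise ValueError(f"unknown task type: {task['type']}")

# Enqueue simple tasks instead of one complex instruction block.
tasks.put({"type": "fetch", "source": "feed-1"})
tasks.put({"type": "index", "doc": "doc-42"})

results = []
while not tasks.empty():
    results.append(handle(tasks.get()))
print(results)  # ['fetched feed-1', 'indexed doc-42']
```

The appeal of this shape is that scaling is just adding workers, and a failed task can be retried on its own without replaying a whole complex instruction set.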
We keep pushing forwards and I have my fingers crossed that we’ll have made some much more visual progress by next week.
- Message queuing working properly.
- An inkling of light at the end of the engine tunnel.
- Catching up with a few mates on the phone.
- Continuing slow grind of progress.
- A week at home with no external contact gets lonely. Got to keep the social side up too.
Goal this week?
Same as last week — get that first prototype out the door ASAP.