FOMO: LLM embeddings to keep up with AI
An open source newsreader app I built using LLM embeddings to semantically prioritise arXiv and Github updates.
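The core idea behind that kind of semantic prioritisation can be sketched in a few lines. This is an illustrative assumption, not the app's actual code: embed the user's interests and each incoming item, then rank items by cosine similarity. The embeddings below are tiny hypothetical vectors; real ones would come from an LLM embedding API.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical pre-computed embeddings (stand-ins for LLM API output).
interests = [0.9, 0.1, 0.3]
items = {
    "arxiv:2401.0001": [0.8, 0.2, 0.4],
    "github:foo/bar":  [0.1, 0.9, 0.0],
}

# Highest-similarity items float to the top of the feed.
ranked = sorted(items, key=lambda k: cosine(interests, items[k]), reverse=True)
print(ranked[0])  # → arxiv:2401.0001
```

With real embeddings the shape is identical; only the vectors (and usually a vector index instead of `sorted`) change.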
Winning EF’s Bio x AI hackathon with a multimodal LLM for lab protocol automation.
The past 18 months in AI have been a whirlwind clusterfuck of acceleration, innovation, and chaos.
We used LLaVA and Claude to build a crazy multimodal LLM health assistant. OpenAI made it redundant the next week.
There is a trifecta of variables that just about everyone building generative AI systems cares about: cost, quality, and latency. As is often the case in computing, these three properties trade off against one another. And all three are bought with the currency of performance.
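The trade-off can be made concrete with a toy model-selection helper. All numbers below are hypothetical, and `Tier` and `best_under_budget` are illustrative names, not anything from a real API: given cost and latency budgets, pick the highest-quality model tier that fits both.

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    cost_per_1k: float   # USD per 1K tokens (hypothetical)
    quality: float       # benchmark score in [0, 1] (hypothetical)
    latency_ms: float    # median response latency (hypothetical)

TIERS = [
    Tier("small",  0.0005, 0.70,  300),
    Tier("medium", 0.003,  0.85,  900),
    Tier("large",  0.03,   0.95, 2500),
]

def best_under_budget(tiers, max_cost, max_latency):
    """Return the highest-quality tier within both budgets, or None."""
    ok = [t for t in tiers
          if t.cost_per_1k <= max_cost and t.latency_ms <= max_latency]
    return max(ok, key=lambda t: t.quality) if ok else None

choice = best_under_budget(TIERS, max_cost=0.005, max_latency=1000)
print(choice.name)  # → medium
```

Relax either budget and the optimiser drifts toward the large tier; tighten both and you are forced down to small. That is the whole trifecta in one function.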
Why some sail and others drift.
Making 95% annual return on the hardest data science tournament in the world, for fun and profit.
My MSc thesis and what it aims to do for biomedical science.
This project applied statistical learning techniques to an observational Quantified-Self (QS) study to build a descriptive model of sleep quality. A total of 472 days of my sleep data was collected with an Oura ring. This was combined with a variety of lifestyle, environmental, and psychological data, harvested from multiple sensors and manual logs.
How I almost outsmarted 3 pro forecasters using Bayes' rule.