# Algos

My favorite activity is developing custom algorithms & models within domains that interest me.

## Showcase

Below are high-level descriptions of particularly noteworthy algorithms that I've developed for Math Academy (in collaboration with Jason Roberts). They are novel and proprietary.

Fractional Implicit Repetition (FIRe)
• Generalizes discrete spaced repetition on independent flashcards to fractional implicit spaced repetition on knowledge graphs of interconnected skills and concepts.
• Estimates student knowledge profiles and selects personalized learning tasks that optimize knowledge persistence over time, striking an optimal balance between learning new topics (maximizing knowledge gain) and reviewing already-seen topics (minimizing knowledge loss).
• Algorithm structure has analogies to biology: topics = cells, spaced repetition latent state = chemical concentrations within cells, knowledge graph = brain, correct answers = stimulants of cell growth, incorrect answers = inhibitors of cell growth, learning tasks = stimuli to brain.
• Speeds up learning by a factor of 4x and improves mastery: students learning via FIRe on Math Academy's personalized learning platform can complete AP Calculus BC in just 35 minutes per school day with improved AP exam scores as compared to an instructor-led course consisting of 12 hours per week (1 hour class and 1 hour homework per school day, plus an amortized 2 hours per week of studying for quizzes/tests and the AP exam itself).
• Has enabled sufficiently motivated 6th grade students to progress from prealgebra to AP Calculus BC over the span of just 2 semesters.
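The core idea above — explicit repetitions on a topic implicitly refreshing its prerequisites — can be sketched in miniature. This is a minimal illustration only, with a toy exponential-decay memory model, hypothetical topic names, and made-up constants; the actual FIRe algorithm is proprietary and far more sophisticated.

```python
import math

# Toy knowledge graph: each topic lists its prerequisites.
# Practicing a topic implicitly exercises its prerequisites.
PREREQS = {
    "fractions": [],
    "linear-equations": ["fractions"],
    "quadratics": ["linear-equations"],
}

def ancestors(topic, graph):
    """All prerequisites of a topic, direct and indirect."""
    seen, stack = set(), list(graph[topic])
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(graph[t])
    return seen

class FractionalRepetition:
    """Toy memory model: each topic has a decaying 'memory' value in [0, 1].
    An explicit repetition restores it fully; an implicit repetition
    (practicing a post-requisite) restores only a fraction of the gap."""

    def __init__(self, graph, decay=0.1, implicit_credit=0.5):
        self.graph = graph
        self.decay = decay                      # per-day exponential decay rate
        self.implicit_credit = implicit_credit  # fractional credit to prereqs
        self.memory = {t: 0.0 for t in graph}

    def pass_time(self, days):
        for t in self.memory:
            self.memory[t] *= math.exp(-self.decay * days)

    def practice(self, topic):
        # Full explicit credit to the practiced topic...
        self.memory[topic] = 1.0
        # ...and partial implicit credit to every prerequisite.
        for prereq in ancestors(topic, self.graph):
            gap = 1.0 - self.memory[prereq]
            self.memory[prereq] += self.implicit_credit * gap

model = FractionalRepetition(PREREQS)
model.practice("fractions")
model.pass_time(7)
model.practice("quadratics")  # implicitly refreshes fractions & linear equations
```

Note how practicing "quadratics" partially restores the decayed memory of "fractions" without an explicit review — that is the fractional implicit repetition, in caricature.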

XP Penalty System
• Context: Students are routinely served a handful of learning tasks to choose from, and they earn XP for completing tasks with satisfactory performance. In the absence of a penalty system, adversarial students will complete tasks they feel are easy and then submit random guesses to intentionally fail out of tasks that require more effort.
• Applies a penalty (negative XP) when it detects that a student is failing tasks as a result of being unwilling to put in effort. Tracks the amount of "anger" that would build up in a tutor or guardian sitting next to the student, and then translates that anger into an XP penalty.
• Effectively shuts down adversarial behavior while simultaneously not impacting cooperative students. Many adversarial students' pass rates jumped from 50% to over 90%.
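The "anger" framing above lends itself to a simple accumulator. Below is a toy sketch of that idea only — all constants, the effort signal, and the penalty formula are illustrative assumptions, not Math Academy's real mechanism.

```python
class AngerMeter:
    """Toy model of the 'tutor anger' idea: anger builds when a student
    fails tasks with low effort, cools off when they work cooperatively,
    and translates into an XP penalty once it crosses a threshold.
    Every constant here is hypothetical."""

    def __init__(self, build=1.0, cooloff=0.5, threshold=1.5, xp_per_anger=10):
        self.build = build                # anger gained per zero-effort fail
        self.cooloff = cooloff            # anger shed per passed task
        self.threshold = threshold        # anger tolerated before penalizing
        self.xp_per_anger = xp_per_anger  # XP docked per unit of excess anger
        self.anger = 0.0

    def record_task(self, passed, effort):
        """effort in [0, 1], e.g. time spent relative to expected time."""
        if passed:
            self.anger = max(0.0, self.anger - self.cooloff)
        else:
            # A low-effort fail looks like guessing; an honest struggle doesn't.
            self.anger += self.build * (1.0 - effort)

    def xp_penalty(self):
        excess = max(0.0, self.anger - self.threshold)
        return round(excess * self.xp_per_anger)

meter = AngerMeter()
meter.record_task(passed=False, effort=0.1)  # rapid-fire guessing
meter.record_task(passed=False, effort=0.0)  # more guessing
meter.record_task(passed=False, effort=0.9)  # honest struggle: little anger
```

The key design property is asymmetry: a student who fails after genuine effort accrues almost no anger, so cooperative students are untouched while guessers quickly cross the penalty threshold.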

Fast Diagnostic
• Performs inference during diagnostic exams to massively reduce the number of questions that must be asked to characterize a student's knowledge profile.
• Efficiently searches along the sequence in which topics are conventionally covered in standard math classes, while simultaneously using a prerequisite graph for causal inference and filtering.
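One way to picture the interplay of sequence search and graph inference: a correct answer implies the topic's prerequisites are known, and an incorrect answer rules out its post-requisites, so each question settles many topics at once. The sketch below is a toy rendition under a simplifying assumption (a chain-shaped graph, hypothetical topic names, and a clean "knowledge frontier"); the real diagnostic is more elaborate.

```python
# Toy chain of topics in conventional course order, with prerequisites.
PREREQS = {
    "fractions": [],
    "linear-equations": ["fractions"],
    "quadratics": ["linear-equations"],
    "polynomials": ["quadratics"],
    "limits": ["polynomials"],
    "derivatives": ["limits"],
}
SEQUENCE = ["fractions", "linear-equations", "quadratics",
            "polynomials", "limits", "derivatives"]

# Invert the prerequisite edges to get post-requisites.
POSTREQS = {t: [] for t in PREREQS}
for t, ps in PREREQS.items():
    for p in ps:
        POSTREQS[p].append(t)

def closure(topic, edges):
    """Transitive closure of a topic under an edge map."""
    seen, stack = set(), list(edges[topic])
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(edges[t])
    return seen

def diagnose(sequence, student_knows):
    """Binary-search the student's frontier along the course sequence,
    inferring unasked topics from the graph: a correct answer implies
    its prerequisites, an incorrect answer rules out its post-requisites."""
    known, unknown, asked = set(), set(), 0
    lo, hi = 0, len(sequence) - 1
    while lo <= hi:
        topic = sequence[(lo + hi) // 2]
        if topic not in known and topic not in unknown:
            asked += 1
            if student_knows(topic):
                known |= {topic} | closure(topic, PREREQS)
            else:
                unknown |= {topic} | closure(topic, POSTREQS)
        if topic in known:
            lo = (lo + hi) // 2 + 1
        else:
            hi = (lo + hi) // 2 - 1
    return known, unknown, asked

# Student who has mastered everything up through quadratics:
known, unknown, asked = diagnose(SEQUENCE, lambda t: SEQUENCE.index(t) <= 2)
```

Even in this tiny example, three questions classify all six topics; on a real curriculum with thousands of topics and a branching graph, the savings compound dramatically.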

Expression Equivalence
• Determines whether a free response mathematical expression matches the answer key expression.
• Constructs a sample of numerical substitutions such that the free response answer is almost certain to be correct if it matches the answer key on the sample.
• Intelligently handles not only numerical overflow but also details like mathematical ambiguity and the context-dependence of mathematical rigor, just like a human grader would.
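The sampling idea above can be sketched as follows. This is a bare-bones version of equivalence checking by random numerical substitution — the expressions here are hypothetical lambdas, and the real grader additionally handles overflow, ambiguity, and domain subtleties that this sketch ignores.

```python
import math
import random

def probably_equivalent(f, g, var_ranges, samples=30, tol=1e-9, seed=0):
    """Evaluate two expressions at random points; if they agree (within
    tolerance) at every sampled point, they are almost certainly
    equivalent, since distinct 'nice' expressions disagree almost
    everywhere."""
    rng = random.Random(seed)
    agreements = 0
    for _ in range(samples):
        point = {v: rng.uniform(lo, hi) for v, (lo, hi) in var_ranges.items()}
        try:
            a, b = f(**point), g(**point)
        except (ValueError, ZeroDivisionError, OverflowError):
            continue  # point outside a shared domain; just resample
        if not math.isclose(a, b, rel_tol=tol, abs_tol=tol):
            return False
        agreements += 1
    return agreements > 0

# (x + 1)^2 vs x^2 + 2x + 1: algebraically equivalent
same = probably_equivalent(lambda x: (x + 1) ** 2,
                           lambda x: x * x + 2 * x + 1,
                           {"x": (-10, 10)})
# (x + 1)^2 vs x^2 + 1: not equivalent
diff = probably_equivalent(lambda x: (x + 1) ** 2,
                           lambda x: x * x + 1,
                           {"x": (-10, 10)})
```

A single disagreeing sample is conclusive evidence of non-equivalence; agreement on all samples is only probabilistic evidence of equivalence, which is why constructing the sample well matters.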

## Tips

Below are some tips for developing valuable models that I've learned from experience.

Focus on solving a problem that fits you.
• A model's value is measured by how well it solves a problem. To solve a problem, you need 1) the ability to solve it and 2) the persistence to stick with it throughout all the difficulties that will surely arise (otherwise the problem would already have been solved).
• The ability to solve a problem is a combination of domain knowledge and technical expertise -- domain knowledge to envision a feasible solution, and technical expertise to make that vision a reality.
• The willingness to persist has a more emotional root -- doing something you love, wanting to fix something that angers you, a drive to conquer an opponent, etc. The more emotional connection you have to a problem, the easier it will be to stay motivated and persistent.

Consider the full problem from the beginning.
• A model is worthless if it does not solve the desired problem. Even if it's elegant and theoretically interesting -- if it doesn't actually solve a problem, then it's worthless.
• It is easier to simplify a convoluted model that solves the problem, than to extend an elegant model that does not solve the problem.
• Models that are intentionally designed to solve specific real problems usually turn out to be theoretically interesting and fun to build, but a theoretically interesting and fun-to-develop model designed in a vacuum will rarely happen to solve any sort of real problem.

Both you and your model need to understand the full context surrounding the problem.
• The first step to developing a model is to gather domain knowledge and fully grasp the context in which the model is meant to exist. If you skip this step, then your model might work in theory but probably not in real life.
• In order to gather domain knowledge, you need to engage in hands-on experience. So, avoid domains where you're averse to doing things manually and getting your hands dirty.
• A model can only be as good as the underlying data. If you want your model to do what an expert does, it needs to have all the information that an expert uses during their decision-making process. (Heard this one from Jason Roberts, who heard it from Peter Stone.)
• Once a model is detecting and leveraging most of the signal in the data, it's higher ROI to improve the quality and breadth of the data than to increase the algorithmic complexity of the model.

Choose the right level for your first principles.
• It is often more efficient to manually encode expert knowledge in a structured data set and build a model on top of that, than to attempt to build a model that does everything from scratch.
• It's easy to rationalize that manually encoding expert knowledge takes too long. But if spending several weeks (or even months) creating a structured data set by hand will allow your model to accomplish important goals that it couldn't otherwise, then it's totally worth doing.
• Plus, when you have to manually encode expert knowledge, it means that you're creating highly relevant data that isn't publicly accessible. This gives you a major edge over any competitor who is not a domain expert or is unwilling to endure tedium for the sake of the model.
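As a concrete (and entirely hypothetical) illustration of what "manually encoding expert knowledge in a structured data set" can look like: a curriculum expert might record, for each topic, its prerequisites and how strongly each one is exercised. The topic names, fields, and weights below are invented for illustration.

```python
# Hand-encoded expert knowledge as structured data. An expert fills in
# entries like these by hand; models are then built on top of them.
CURRICULUM = {
    "quadratic-formula": {
        "prereqs": {"square-roots": 0.9, "linear-equations": 0.6},
        "course": "Algebra I",
    },
    "completing-the-square": {
        "prereqs": {"perfect-square-trinomials": 1.0, "square-roots": 0.7},
        "course": "Algebra I",
    },
}

def strongest_prereq(topic):
    """The prerequisite an expert judged most heavily exercised by a topic."""
    prereqs = CURRICULUM[topic]["prereqs"]
    return max(prereqs, key=prereqs.get)
```

The data set is tedious to produce, but once it exists, even simple functions over it encode judgments that no general-purpose algorithm could recover from public data.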

Leverage your domain expertise.
• In every domain there are insights that a domain expert would know from experience, that would likely evade a general-purpose algorithm. (This is not surprising, because general-purpose algorithms get their power of generality by reducing assumptions and focusing on the aspects of a problem that are conserved across domains.)
• By introducing domain-specific "shortcut" assumptions and considering relevant aspects of the problem that would otherwise be ignored, you can leverage your domain knowledge as a major advantage against competitors who do not have as much domain expertise.
• Domain expertise is to hard work as algorithmic aptitude is to talent. You can't increase your algorithmic aptitude by very much, but you can vastly increase your domain expertise by leaving the world of abstraction and getting concrete hands-on experience.
• Even after you become a domain expert, don't get cocky thinking you know everything there is to know within the domain. The amount you know is still orders of magnitude less than the amount you don't, and the distribution of domain expertise among domain experts has a fat tail.

Get a feel for your model's behavior.
• Routinely step back from the theory and implementation and observe your model's behavior. It needs to make sense intuitively and "feel" right emotionally. (If you've spent enough time building domain knowledge by doing things manually and getting your hands dirty, then you should have emotional reactions to the decisions the model makes.)
• The best machine learning model you have is your brain, and your brain only interfaces with interpretable computer models.
• The more linear and low-dimensional a model is, the easier it is to find good parameters using your intuition alone.
• Emotion is an essential part of the feedback loop for improving a model: 1) inspect the model's output, 2) produce a negative emotional reaction, 3) introspect your emotions to identify the root cause of the negativity, 4) describe what the output needs to look like in order to produce a positive emotional reaction, 5) tweak the model to give the desired output, 6) return to step 1.

Make your model robust and reliable.
• Make your model robust to data issues (but make sure it logs a warning whenever it comes across a data issue). Data issues will happen from time to time, especially if the model is being developed in parallel with the underlying data infrastructure. The model can't just fall over and refuse to work whenever data issues happen.
• The more complex your model is, the more internal validation it needs. Depending on the severity and veracity of a failed sanity check, the model should either log a warning or throw an error, halt, and alert you.
• Unit tests are ideal, but if your model exists within a very complex system, then your unit tests won't cover all the possible edge cases no matter how hard you try, so internal and external validation become very important. (Internal validation = validating the internal perceptions and decisions of the model as it runs on real data in real time; external validation = validating the model's sequence of output decisions over time on recent real data.)
• To gain confidence in your model and speed up the debugging process, it helps to generate human-readable justifications for why your model makes the decisions it does.
• It's often worth investing some time to make your logs highly informative yet easy to skim. (Indents and empty line dividers are your friends.) Tuning and debugging go much faster if you can see the forest for the trees.
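The "warn on recoverable issues, halt on severe ones" pattern above can be sketched in a few lines. This is a generic illustration with a made-up metric and made-up record format, not any particular production model.

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger("model")

class SanityError(RuntimeError):
    """Raised when a severe internal validation check fails."""

def estimated_mastery(records):
    """Average a student's scores, tolerating (and logging) bad records,
    but halting loudly on a result that cannot possibly be right."""
    scores = []
    for r in records:
        score = r.get("score")
        if score is None:
            # Recoverable data issue: warn and keep going.
            log.warning("record missing 'score', skipping: %r", r)
            continue
        scores.append(score)
    if not scores:
        log.warning("no usable records; defaulting mastery to 0.0")
        return 0.0
    mastery = sum(scores) / len(scores)
    # Severe sanity check: a mastery outside [0, 1] means a real bug
    # somewhere upstream, so refuse to proceed silently.
    if not 0.0 <= mastery <= 1.0:
        raise SanityError(f"mastery {mastery} outside [0, 1]")
    return mastery

ok = estimated_mastery([{"score": 0.8}, {"id": 7}, {"score": 0.6}])  # logs one warning
```

The malformed record is skipped with a warning rather than crashing the run, while an out-of-range result escalates to a hard error — matching the severity-dependent response described above.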

Refactor when appropriate.
• One goal of refactoring is to save you time in the long run. On one hand, you shouldn't refactor until you're reasonably confident that what you refactor is a permanent and essential part of the solution. On the other hand, you shouldn't wait so long to refactor that you're experiencing lots of friction when trying to extend your solution.
• Another goal of refactoring is to enable other people to understand and modify your code. If you're going to hand off a piece of code to someone, then you should first refactor until it's reasonably clean.

Never stall out. (Corollary: Control the data-generating process.)
• Keep forward momentum. If a model is not producing a desired behavior and you're out of ideas, then temporarily hard-code the desired behavior as an "intervention", move on, and periodically revisit the intervention to try out more elegant ideas.
• If you don't control the data-generating process, then it becomes vastly more difficult (and sometimes impossible) to resolve data issues. You either need to own the data-generating process yourself or have trust, a good relationship, and an open line of communication with the person who does.
• Don't embark on a project unless you have some solid ideas on how to approach it. If your desired outcome feels magical, then you probably don't (yet) have enough technical knowledge to achieve it.
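The "intervention" idea above is easiest to see in code. This is a hypothetical example — the function, constants, and spacing rule are all invented — showing how a hard-coded override can be clearly labeled so it gets revisited rather than forgotten.

```python
def next_review_interval(times_reviewed):
    """Days until a topic's next review. The spacing rule is a placeholder.

    TODO(intervention): the model's cold-start estimate for brand-new
    topics was unreliable, so the first interval is hard-coded below.
    Revisit periodically and replace with a principled fix.
    """
    if times_reviewed == 0:
        return 1  # intervention: hard-coded desired behavior for new topics
    return min(2 ** times_reviewed, 60)  # placeholder exponential spacing
```

The override keeps the system shipping the desired behavior today, while the labeled TODO preserves forward momentum toward a more elegant solution later.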