When I was growing up in Korea, I used to create a "오답노트" for school. Translated very literally, this is a "wrong answer notebook"; in practice, a 오답노트 was a journal collecting all the homework or quiz problems that I got wrong.
After school, I iterated on this journal: I re-did every problem I had gotten wrong, and any problem I missed again earned a fresh entry. I repeated the process until there were no new entries, i.e. I had correctly answered every question I had previously missed[i]. Correcting my mistakes over and over turned out to be an effective learning tool for me, albeit primarily for standardized testing.
Today, more than a decade later, I'm thinking back to that tool and realizing: why does a 오답노트 need to be confined to algebra or English grammar questions? Why can't I re-use the same concept to avoid repeating my mistakes and learn from my failures?
Last year, I ran an experimental project to leverage a new technology for HHVM extensions. However, after working on the project for a few weeks, we decided to call it quits, as we were finding an increasing number of both technical and alignment problems. At first, I was a bit dejected at "giving up" on this project, having already spent much time and feeling like we had come close. However, my manager stepped in and framed the result differently for me. I hadn't failed outright by not landing the prototype. Instead, I had succeeded in proving that this approach wasn't worth its effort; I could disseminate my learnings to the team so that others wouldn't repeat the same mistake, and that was value I could derive from the "failure" outcome.
This was an eye-opening experience for me. There is inherent value in failure; as the Thomas Edison quote goes, "I have not failed. I have just found 10,000 ways that won't work." The fact that I failed wasn't the important part; the important part was that I recorded the lesson in my 오답노트 and learned from the experience.
This lesson is even more important nowadays, especially in software engineering, which has been upended by the recent release of Anthropic's Claude Opus 4.5. The paradigm going forward will be learning how to teach and direct AI models to do what we want them to do[ii]. As we all try to figure out the best approaches to leveraging LLMs, every failure is actually a valuable experience that can make the model better on the next run. If the model made a seemingly innocuous but incorrect change, we need to build better testing so it can self-correct. If the model couldn't figure out which files to edit, we need to give better guidance on where to look. If the model could only make incremental improvements but couldn't fix the problem end-to-end, we need a different framework, such as the Ralph Wiggum technique. Each failure teaches us something about the model's capabilities, and how we can avoid the same mistake next time.
Of course, learning from mistakes is nothing new[iii]. But what is new to me is that we can actually cherish our mistakes, because they are opportunities for us to learn. As the old Chinese saying goes, 失敗乃成功之母: failure is the mother of success.