0 - Algorithms to live by - [1 / 13]
As promised earlier this week,
I recently stumbled upon an incredible book called "Algorithms to Live By" while watching an enlightening YouTube channel, "Up and Atom." It's an exploration of how the algorithms we encounter in computer science can actually transform our daily lives for the better.
As I dived into the book, I was pleasantly surprised to rediscover familiar concepts like searching and caching, and to see how to apply them amid the chaos of everyday life. It's amazing how these algorithms can provide practical solutions to the challenges we face and help us make better decisions in many areas of our lives. OK, OK, it usually isn't paradise :).
Intrigued? Well, let me tempt you further with a sneak peek of the book:
1. Optimal Stopping - Discover the art of making the right decision at the right time, whether it's finding your dream job or choosing the perfect apartment.
2. Explore/Exploit - Unleash your sense of adventure while balancing the exploration of new opportunities with making the most of the ones you already have.
3. Sorting - Learn how to organize and prioritize your tasks, creating order out of chaos and increasing your productivity.
4. Caching - Unlock the power of memory and leverage efficient information-retrieval techniques to optimize your learning and decision-making.
5. Scheduling - Master the art of time management and create effective schedules that maximize your efficiency and reduce stress.
6. Bayes's Rule - Gain insights into probability and learn how to make more accurate predictions and judgments in your daily life.
7. Overfitting - Avoid the pitfalls of overanalyzing and making biased decisions by finding the right balance between information and intuition.
8. Relaxation - Discover the importance of looking at problems from a different angle, and understand that good enough often beats perfect.
9. Randomness - Embrace the element of surprise and learn how a dash of randomness can lead to surprisingly good solutions.
10. Networking - Explore techniques for dealing with miscommunication, knowing when to try again, and more.
11. Game Theory - Uncover the strategies behind decision-making in competitive situations and learn how to navigate complex scenarios.
12. Conclusion - Wrap up the journey with key takeaways and actionable insights to keep optimizing your life using algorithms.
Get ready to unleash the power of algorithms and let them shape your everyday decision-making. Buckle up and join me on this adventure! Together, we'll optimize our lives one algorithm at a time.
Webpage of the book: https://brianchristian.org/algorithms-to-live-by/
Youtube channel "Up and Atom": https://www.youtube.com/@upandatom
#AlgorithmsToLiveBy #OptimizeYourLife #EverydayAlgorithms
1 - Optimal Stopping - [2 / 13]
Get ready to dive into the fascinating world of decision-making algorithms! Let's embark on this adventure together and optimize our lives one algorithm at a time. Join me!
In the first chapter, we explore Optimal Stopping, focusing on the captivating Secretary Problem. It's all about accepting or rejecting options as they come, knowing only the total number of choices (N). We can only compare candidates with each other, without knowing their underlying distribution. Curious? Check out the problem on Wikipedia for more details!
๐ก The "basic & optimal" solution suggests rejecting approximately 37% (~1/e) of the candidates, then selecting the next candidate that surpasses all the previously rejected ones. The idea is to gather information and make an informed decision. ๐ง
However, let's dive deeper! My concern lies in the assumptions underlying this solution, which are often not fulfilled. Here are some common assumptions:
1. You lack prior information about the candidates. [less]
2. Revisiting previous candidates is not allowed. [more]
3. Candidates cannot reject you. [less]
4. Your objective is to select the absolute best candidate. [less]
5. There are no costs associated with waiting. [less]
6. You know the exact number of candidates.
These assumptions impact how many candidates you should see before making your decision. If an assumption is violated, the bracketed tag indicates whether you should look at fewer [less] or more [more] candidates before committing.
Moreover, it's important to acknowledge that there is a cost associated with the process of searching for the perfect candidate or option. Sometimes, waiting too long or exhaustively exploring every possibility can lead to missed opportunities or delays in decision-making. As the saying goes, "Perfection is the enemy of good."
Here's my own strategy: assuming 10 or 100 candidates, depending on the problem, I typically evaluate 3 candidates (out of 10) or 10 candidates (out of 100) and then select the first one that beats the best or second-best I've seen. If possible, I go back to the best candidate I've encountered. Of course, fine-tuning is crucial based on the problem's significance.
Pro tip: if you can't determine the number of candidates, you can apply the same strategy to time. Simply allocate a total time frame, spend the look-phase percentage of it just searching, and then commit.
Check out the Streamlit visualization via the provided link to gain a clearer understanding. [https://agomezh-blog-optimal-stopping-streamlit-app-gyvhe7.streamlit.app]
To illustrate this approach, let me share an example from my recent trip. I wanted to rent a car and gave myself about 30 minutes to inquire. After approaching 3 different companies, the fourth one offered the best rate of all [100, 100, 125, 90]. Let's not dwell on the minor car problems I encountered afterward!
Exciting journey ahead! Feel free to connect and share your experiences with algorithm-driven decision-making. Let's optimize our choices and unlock new possibilities together!
#DecisionMaking #Algorithms #Optimization #streamlit #AlgorithmsToLiveBy
2 - Explore / Exploit - [3 / 13]
We continue our journey through "Algorithms to Live By"! Chapter 2 unravels the captivating Explore/Exploit dilemma.
The dilemma reveals the choice between sticking with the familiar or embracing the unknown, where each decision carries the potential for a win or a loss.
But here's the twist: the value of future experiences is discounted. Today's win trumps tomorrow's thrill.
Picture yourself caught in a delicious dilemma: 1) your trusted go-to restaurant, 2) the place where you've had a few unforgettable meals, and 3) the tantalizing temptation of trying something entirely new.
Enter the illuminating Gittins Index, a beacon of decision-making. By evaluating past wins, losses, and the influential discount factor, it empowers you to make informed choices.
With no prior information (0 wins, 0 losses), it's like venturing into unexplored territory. The Gittins Index offers more, though: it lets you compare different options against each other!
Yet, a few challenges persist:
1. Calculating the Gittins Index isn't a walk in the park.
2. The discount factor holds significant influence.
But fear not! I've unlocked a secret to streamline your decision-making process, making for quick and worry-free choices amidst a sea of options. Introducing my trusty rule of thumb:
Let's categorize problems into two exciting types:
1. Losses that won't rock your world (low discount factor). In this case, my score is 3*wins - losses.
2. Losses that come at a hefty price (high discount factor). Here, my score is 2*wins - losses.
And behold! If the score hits zero, it's time to embark on an adventure and explore the unknown! Let your curiosity lead the way!
Two examples (a small code sketch follows them below):
1. [Low discount factor]: Opt for the restaurant with the highest score of 3*wins - losses.
2. [High discount factor]: Choose a vacation destination with the highest score of 2*wins - losses.
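To make the rule of thumb concrete, here's a tiny Python sketch. It encodes only my 3*wins - losses / 2*wins - losses scoring from above, not the Gittins Index itself, and the restaurant names and tallies are made up.

```python
def score(wins, losses, high_stakes=False):
    """Rule-of-thumb score: 3*wins - losses when losses are cheap,
    2*wins - losses when losses are expensive. Zero (or less) means: explore!"""
    weight = 2 if high_stakes else 3
    return weight * wins - losses

# Restaurants (losses are cheap, so high_stakes=False) -- made-up tallies.
restaurants = {"trusted go-to": (8, 2), "memorable spot": (3, 3), "brand new place": (0, 0)}
for name, (wins, losses) in sorted(
    restaurants.items(), key=lambda kv: score(*kv[1]), reverse=True
):
    print(f"{name}: score {score(wins, losses)}")
```

The brand new place scoring zero is exactly the "go explore" signal.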
I'm a forgiving soul when it comes to embracing new experiences.
Did you catch the previous post on Optimal Stopping and the Secretary Problem? If not, go and check it out!
#AlgorithmsToLiveBy #DecisionDilemma #ExploreExploit #GittinsIndex #DecisionMaking #Simplification
3 - Navigating Sorting Algorithms - [4/13 Series]
In chapter three of "Algorithms to Live By," the spotlight falls on the fascinating world of sorting. We all engage in some form of sorting daily - whether we realize it or not. However, sorting takes on a different character when it comes to computer science, impacting processing speed substantially.
Essentially, sorting often precedes searching in computing, especially when dealing with large datasets. By organizing the data, we can expedite the search process. To illustrate this, consider my wardrobe: without sorting, locating specific clothing becomes a time-consuming endeavor, albeit resulting in some interesting attire combinations.
Two noteworthy concepts arise when discussing sorting:
1. Sorting Metrics: When sorting, comparisons between elements are crucial. The criteria or 'metric' used for comparison can profoundly influence the sorting order. The authors intriguingly use primate social hierarchies as an example - if two individuals have conflicting opinions about who's the alpha, it could lead to conflict.
2. Errors and Randomness: Errors that occur during sorting can affect the choice of sorting algorithm. In such cases, less complex algorithms can turn out to be more efficient.
Let's look at three prevalent sorting algorithms:
Quick Sort: Efficient on average and light on memory, but its worst case can be slow. Incorporating some randomness in the pivot choice addresses this (more on randomness in an upcoming chapter; see the sketch after this list).
Merge Sort: Fast and simple, albeit requiring more memory.
Bubble Sort: Not the fastest, but it's memory-efficient and particularly resistant to errors.
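As a companion to the list above, here's a minimal randomized quicksort sketch (mine, not the book's). For readability it builds new lists rather than sorting in place, so it trades away some of quicksort's usual memory frugality; the point is simply how a random pivot makes the slow worst case extremely unlikely.

```python
import random

def quicksort(items):
    """Quicksort with a random pivot: the random choice makes the O(n^2)
    worst case vanishingly unlikely on any particular input."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    smaller = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    larger = [x for x in items if x > pivot]
    return quicksort(smaller) + equal + quicksort(larger)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))  # [1, 2, 3, 5, 7, 8, 9]
```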
For further insights:
Sorting Tips: https://towardsdatascience.com/surprising-sorting-tips-for-data-scientists-9c360776d7e
The Art of Unix Sorting: https://en.wikipedia.org/wiki/Sort_(Unix)
A Deep Dive into Sorting Algorithms: https://en.wikipedia.org/wiki/Sorting_algorithm
Feel free to share your resources, comments, and experiences!
...Did you miss the previous post on 'Explore / Exploit'? Visit the series timeline to catch up!
If you've found this information useful, don't hesitate to hit the 'like' button and share it across your network!
Engage in the discussion using these tags: #SortingAlgorithms #ComputerScience #AlgorithmsToLiveBy #DataManagement #EfficientCoding #DataScience #ProgrammingInsights
Stay tuned for the next instalment in our series!
4 - Caching [5/13 Series]
Exciting update! Chapter 4 of the "Algorithms to Live By" series is here! Join Brian and Tom as they unveil the key to lightning-fast programs: caching!
What's caching, you ask? It's like a turbo-charged memory booster! Store important stuff for quick access later. Ready for some programming magic?
Two crucial ideas stood out: memory and organization. Let's dive right in!
Quoting the book: "The best guide to the future is a mirror image of the past. Assume history repeats itself - backward!"
LRU Strategy: "Least Recently Used" to the rescue!
Imagine needing to remember 100 names, but your memory can only hold 10. No worries! LRU saves the day! Keep the recently used items and evict the least recently used one. Memory management made efficient!
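Here's a minimal LRU cache sketch in Python using OrderedDict (my own illustration, not code from the book); the capacity of 10 mirrors the 10-names example above. Python also ships functools.lru_cache if you just want to memoize a function.

```python
from collections import OrderedDict

class LRUCache:
    """Keep at most `capacity` items; evict the least recently used one."""
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None
        self.items.move_to_end(key)          # mark as recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)   # drop the least recently used

names = LRUCache(capacity=10)
for i in range(100):
    names.put(f"person_{i}", f"Name {i}")
print(list(names.items))  # only the 10 most recently used names remain
```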
Organize and optimize: layers are the key!
Don't get lost in your cache! Proper organization is key. Here's the secret: layer it up!
Layers are everywhere, from our thoughts to society's structure (countries, cities, neighborhoods - oh my!). Embrace the power of three: remember up to three things per layer. That's a total of 9 things in your organized cache!
Bonus tip: I do this every day too! I organize my day with 3 important things, and if needed, break them down into 3 tasks each.
Curious about previous posts in this exciting series? Don't miss out! Check them out now!
#AlgorithmsToLiveBy #Caching #BoostYourSpeed #LRUStrategy #OrganizationMatters
5 - Scheduling [6/13 Series]
The fifth chapter delves into the intriguing world of scheduling. This topic is not only complex but also incredibly relevant to our daily lives.
The authors begin with toy examples that tackle the challenge of task organization, reminiscent of the famous Knapsack Problem, known for its complexity as an NP-complete problem.
However, the real gem in this chapter is the authors' exploration of choosing the right metric to evaluate the problem. Imagine having a set of tasks with due dates; you might want to minimize the number of tasks that end up being late or reduce the amount of time each task is delayed. The choice of the metric is fundamental!
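To see how much the metric matters, here's a small toy sketch (my own made-up tasks, not an example from the book). Ordering by earliest due date is the classic way to minimize the maximum lateness, while doing the shortest task first minimizes the total time tasks spend waiting to finish; the same task list scores differently under each metric.

```python
# Each task: (name, duration, due date) -- toy data, purely illustrative.
tasks = [("report", 4, 5), ("email", 1, 2), ("slides", 3, 9), ("review", 2, 4)]

def evaluate(order):
    """Return (maximum lateness, total completion time) for a given ordering."""
    now = max_lateness = total_completion = 0
    for _, duration, due in order:
        now += duration
        total_completion += now
        max_lateness = max(max_lateness, now - due)
    return max_lateness, total_completion

by_due_date = sorted(tasks, key=lambda t: t[2])   # earliest due date first
by_duration = sorted(tasks, key=lambda t: t[1])   # shortest task first

print("Earliest due date first:", evaluate(by_due_date))  # (2, 21)
print("Shortest task first:   ", evaluate(by_duration))   # (5, 20)
```

Which ordering is "better" depends entirely on which of the two numbers you care about.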
Interestingly, the authors also make a compelling case for procrastinators, highlighting that resolving small, seemingly irrelevant tasks could be optimal when measured against the wrong metric (e.g., the number of tasks completed rather than focusing on the most important ones).
An intriguing analogy drawn is the concept of burnout. Imagine a juggler expertly juggling multiple balls. Adding another ball might seem like a great idea at first, but if the juggler reaches their limit, all the balls may come crashing down. This beautifully reflects the perils of over-scheduling and burnout in our own lives.
So, what's the solution? The authors propose setting limits. By deciding upfront how much time we will allocate to specific tasks or employing strategies that allow for some free time to handle the unexpected, we can prevent our schedules from collapsing.
If you're intrigued by these fascinating insights and want to explore the previous chapters in this amazing series, be sure to check them out!
Markwhen is an excellent tool for creating waterfall timelines; it's like Markdown, but for scheduling! Google it and check it out!
#AlgorithmsToLiveBy #Scheduling #Productivity #BurnoutPrevention #TaskOrganization #TimeManagement #markwhen
6 - Bayes's Rule [7/13 Series]
Welcome to the next chapter in our algorithmic adventure, folks! Let's dive into the realm of Bayes's Rule, an intriguing result that offers a fresh perspective on probability, moving beyond a purely frequentist approach and incorporating beliefs.
Bayes's Rule empowers us to reason with scant data, letting our beliefs (the prior) guide our thinking. Take, for instance, a win/loss game with no strong assumptions. Given n rounds, of which w are wins, our estimate of the chance of success should be (w + 1) / (n + 2), mirroring the ratio you'd get if you played two more rounds, winning one and losing the other. Perfect for those quick ballpark estimates!
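That (w + 1) / (n + 2) estimate is Laplace's rule of succession; here it is in a few lines of Python, just to make the ballpark concrete.

```python
def laplace_estimate(wins, rounds):
    """Laplace's rule of succession: estimated chance of winning the next round."""
    return (wins + 1) / (rounds + 2)

print(laplace_estimate(0, 0))   # 0.5  -- no information, so assume even odds
print(laplace_estimate(3, 3))   # 0.8  -- three wins in three rounds, still not certainty
print(laplace_estimate(3, 10))  # ~0.33
```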
Where Bayes's Rule truly shines is its versatility in handling different types of distributions. Ever found yourself waiting impatiently, wondering how long you still have to wait? Let's talk about three types of distributions: average, additive, and multiplicative.
Average distributions shorten as you gather information - think of a movie's running time; additive ones, like waiting for a good poker hand, don't change regardless of how long you've waited; multiplicative scenarios, like life expectancy after cancer treatment, grow the longer you've already waited.
In a nutshell:
Average distributions: the longer you've waited, the shorter your remaining wait - predict the average!
Additive distributions: expect to wait about the same amount of time, no matter what.
Multiplicative distributions: assume you'll wait longer than you already have (double it, perhaps?).
This classification and the abstract use of Bayes's Rule are phenomenally handy.
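Here's a rough sketch of the three prediction rules as tiny Python functions. It's my own simplification under blunt assumptions: you only know how long you've already waited, the "typical" numbers are invented, and the multiplicative rule simply doubles the elapsed time, as suggested above.

```python
def average_rule(elapsed, typical_total=110):
    """Normal-ish quantities (e.g. a movie's length in minutes):
    predict the typical total, so the remaining wait shrinks over time."""
    return max(typical_total - elapsed, 0)

def additive_rule(elapsed, typical_extra=15):
    """Memoryless waits (e.g. the next good poker hand): expect roughly
    the same additional wait no matter how long you've already waited."""
    return typical_extra

def multiplicative_rule(elapsed, factor=2):
    """Power-law quantities: the longer it has lasted, the longer it will last.
    Predict a multiple of the elapsed time; remaining = elapsed * (factor - 1)."""
    return elapsed * (factor - 1)

for minutes_waited in (10, 60, 120):
    print(minutes_waited,
          average_rule(minutes_waited),
          additive_rule(minutes_waited),
          multiplicative_rule(minutes_waited))
```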
Eager for more algorithmic wisdom? Don't forget to check the previous posts in this enlightening series! Remember, a little like and comment can go a long way in sharing the knowledge!
#AlgorithmsToLiveBy #BayesRule #DataScience #MarketingWisdom #SchedulingSecrets
7 - Overfitting [8/13 Series]
This is the eighth post of the "Algorithms to Live By" journey. Today, we'll navigate the maze of overfitting - or, in our everyday lives, the art of overthinking.
Eager to outwit overfitting and overthinking? Allow me to share some easily absorbable insights that can help you escape that trap.
Let's dive right into it:
Embrace simplicity (Occam's razor): Picture two paths to climb a mountain, both promising the same breathtaking view from the top. One is a straightforward ascent, while the other is a winding, treacherous trail. Why embark on a grueling journey when simplicity offers the same rewards? In life, too, simplicity often wins. See a blunder for what it is: a minor slip, not an orchestrated plot to bring you down. And always start with a simple benchmark model.
Data isn't reality (perception's illusion): Our perception is like a pair of glasses through which we view our world. Sometimes it can be blurred, tinted, or even cracked. Data is what we feed into our model's perception glasses. Be mindful, though! Both you and your models are susceptible to interpretive missteps or deceptive signals due to what you consider true. Stay vigilant!
Beyond the hammer-and-nail perspective (consider different metrics): Ever held a hammer and started seeing everything as a nail? Be it politics, friendships, or relationships, we often scrutinize them through the same lens. But why limit ourselves? Consider broadening your evaluation arsenal. For people: energy savers aren't necessarily angels, and those less energy-conscious aren't invariably villains. For models: consider other ways to evaluate the results, such as L1 or L-infinity norms, and add regularization! (A small sketch follows below.)
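As a small illustration of the benchmark-model advice, here's a sketch with NumPy (my own toy example, made-up data): a straight line versus a needlessly wiggly degree-5 polynomial fitted to the same noisy linear trend. The flexible model typically nails the training points and then stumbles on the held-out ones.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 12)
y = 2 * x + rng.normal(0, 0.15, size=x.size)   # a simple linear trend plus noise

train_x, train_y = x[::2], y[::2]              # half the points for fitting
test_x, test_y = x[1::2], y[1::2]              # held-out points

for degree in (1, 5):
    coeffs = np.polyfit(train_x, train_y, degree)
    train_err = np.mean((np.polyval(coeffs, train_x) - train_y) ** 2)
    test_err = np.mean((np.polyval(coeffs, test_x) - test_y) ** 2)
    print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")
```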
If you've found this intriguing, or if you've read the book, like, share, or leave a comment below! We are now past the halfway mark!
#AlgorithmsToLiveBy #Overfitting #SimpleIsBetter #DataVsReality #MetricsMatter
8 - Relaxation [9/13 Series]
As we venture into the eighth chapter of 'Algorithms to Live By', we explore the concept of Relaxation. This strategy is used to deconstruct complex problems, particularly in the realm of optimization.
Through Relaxation, we simplify problems with approximation algorithms and put bounds on our results. An exact solution may not always be necessary; something close enough might suffice - but let's not forget to factor in error analysis.
Next, we encounter Lagrangian Relaxation. This technique moves the constraints of your problem into the objective function as penalties. The result is flexibility: a constraint can be violated, but at a price, so solutions are still nudged toward respecting it.
In the grand scheme of things, when a problem seems monumental, remember to relax. Break it down into manageable chunks, put forth your best efforts, and continually refine your solution. It's about balancing precision with practicality.
Let's take a moment to consider the knapsack problem, one of my favorites. Imagine you have a set of objects, each with its own size and value. Your goal is to fit them into your backpack, maximizing the value while limited by the backpack's size. Solving large instances exactly is often out of reach, but approximation algorithms tend to perform remarkably well! Curious? Read more on the Knapsack Problem (Wikipedia: https://en.wikipedia.org/wiki/Knapsack_problem) or check out Google's solution in the OR library (https://developers.google.com/optimization/pack/knapsack).
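Here's a small sketch (my own made-up items) contrasting the exact dynamic-programming answer with the greedy "best value per unit of size" heuristic, which is essentially what the relaxed, fractional version of the problem recommends.

```python
# (value, size) pairs -- made-up items.
items = [(60, 10), (100, 20), (120, 30), (45, 8)]
capacity = 50

def knapsack_exact(items, capacity):
    """Classic 0/1 knapsack dynamic program: exact best achievable value."""
    best = [0] * (capacity + 1)
    for value, size in items:
        for c in range(capacity, size - 1, -1):
            best[c] = max(best[c], best[c - size] + value)
    return best[capacity]

def knapsack_greedy(items, capacity):
    """Greedy by value density, the heuristic suggested by the fractional relaxation."""
    total = 0
    for value, size in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if size <= capacity:
            capacity -= size
            total += value
    return total

print(knapsack_exact(items, capacity))   # 225
print(knapsack_greedy(items, capacity))  # 205 -- close, for a fraction of the work
```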
In case you missed it, feel free to delve into our previous posts for further insights into algorithms and problem-solving. Your likes and shares are appreciated, as they help spread the knowledge. Stay informed, stay updated.
#AlgorithmsToLiveBy #Relaxation #Optimization #ProblemSolving #LagrangianRelaxation #KnapsackProblem #ContinuousLearning
9 - Randomness [10/13 Series]
Chapter 9 of 'Algorithms to Live By' navigates the riveting realm of Randomness, traversing topics like sampling, Monte Carlo methods, randomized algorithms, and the groundbreaking concept of the three-part trade-off.
Let's put the spotlight on this three-part trade-off, a radical idea that flips traditional trade-offs on their head. Typically, in computer systems, the trade-off we wrestle with is size (memory) versus calculation (processing). It's a balancing act between storing but not computing (think databases) or computing but not storing (like recalculating).
Enter the three-part trade-off, introducing a fresh component: certainty. It proposes that you can dial down both size and calculation demands if you're prepared to forgo complete accuracy. Sounds wild, right? This trade-off isn't mere theory - it's efficiently driving systems like Blockchain, Bitcoin, and Bloom filters.
Speaking of Bloom filters, these nifty tools act as compact checklists for spotting known bad actors (think known malicious URLs). They never miss something that was actually added, but they can sometimes flag good actors as bad, so it's crucial to weigh the consequences. In many cases, these false positives merely trigger extra checks, which makes them acceptable. Curious to learn more? Check out Bloom filters on Wikipedia.
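Here's a tiny Bloom filter sketch (my own illustration, using double hashing over a SHA-256 digest; the URLs are made up). Adding an item only ever sets bits, so anything that was added is always reported as "maybe present"; the price is the occasional false positive for things never added.

```python
import hashlib

class BloomFilter:
    """May answer 'maybe present' for items never added (false positives),
    but never misses an item that was actually added."""
    def __init__(self, num_bits=1024, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = [False] * num_bits

    def _positions(self, item):
        digest = hashlib.sha256(item.encode()).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.num_bits for i in range(self.num_hashes)]

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bad_urls = BloomFilter()
bad_urls.add("http://known-malicious.example")
print(bad_urls.might_contain("http://known-malicious.example"))  # True
print(bad_urls.might_contain("http://probably-fine.example"))    # almost surely False
```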
If you've enjoyed this glimpse into the world of algorithms, why not dive a little deeper? You're warmly invited to explore the previous posts for more insights. Your likes and shares do more than just brighten my day: sharing knowledge means we all grow together.
#AlgorithmsToLiveBy #Randomness #ThreePartTradeOff #Blockchain #Bitcoin #BloomFilters #ContinuousLearning
10 - Networking [11/13 Series]
In the tenth chapter of "Algorithms to Live By", we set sail on the seas of internet history, all while unraveling the enigmatic concepts of exponential back-off and the dance of additive increase, multiplicative decrease. It's a digital odyssey that illuminates the core of connectivity and strategic patience. Hop on board!
What really gripped me was the concept of "exponential back-off": an ingenious approach to never surrendering, while also not squandering energy when the odds stack against you.
So, what's this mysterious method? Let's dive in! Exponential back-off suggests that we should try again, but only after a waiting period that grows exponentially with each failed attempt. For instance, you've applied for a job and await a response. You might follow up after 1 day, then 3 days, then 9 days, and if patience wears thin, after 27 days (1, 3, 9, 27 - see the pattern?).
This simple yet profound tactic communicates your interest without appearing overly desperate. It starts slow, then rapidly gains momentum. This pattern mirrors the nature of exponential growth, offering enough breathing space for the other party to adapt, address issues, and respond accordingly. It's akin to a gradually lengthening snooze on your alarm clock!
The book encapsulates this idea with "finite patience and infinite mercy". My personal rule of thumb? Follow up 4 times for the less vital things (like a casual dinner invite), and up to 6 times for those life-changing opportunities (yes, that multi-million-dollar client!). Of course, life has other strategies to offer, but this rule can serve as a trusty guide.
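Here's what that follow-up routine might look like as a minimal Python sketch. It's my own rendering of the idea above, not code from the book: the factor of 3 and the 4-attempt default mirror the numbers in this post, and attempt_contact stands in for whatever "send the reminder" means in your situation.

```python
import time

def follow_up(attempt_contact, max_attempts=4, first_wait_days=1, factor=3):
    """Retry with exponential back-off: wait 1, 3, 9, 27, ... days between attempts.
    `attempt_contact` is any function returning True once you get an answer."""
    wait_days = first_wait_days
    for attempt in range(1, max_attempts + 1):
        if attempt_contact():
            return True
        if attempt < max_attempts:
            print(f"No reply on attempt {attempt}; waiting {wait_days} day(s).")
            time.sleep(wait_days * 86_400)   # days -> seconds; shrink this for testing
            wait_days *= factor
    return False

# For the multi-million-dollar client, simply raise the cap:
# follow_up(send_reminder, max_attempts=6)   # `send_reminder` is a hypothetical function
```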
Equally intriguing is the principle of "additive increase, multiplicative decrease". When things sail smoothly, incrementally add tasks. If a task flops? Halve the workload. A failsafe strategy ensuring our response to failure is as sharp as a hawk's gaze and as resilient as a rubber ball!
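And the same idea in a few lines (again my own transplant of the TCP rule onto a personal workload, with made-up numbers): add one task after a good week, cut the load in half after a bad one.

```python
def adjust_workload(current_tasks, week_went_well, step=1):
    """Additive increase, multiplicative decrease:
    add one task after a good week, halve the load after a bad one."""
    if week_went_well:
        return current_tasks + step
    return max(current_tasks // 2, 1)

load = 4
for went_well in [True, True, True, False, True, True]:
    load = adjust_workload(load, went_well)
    print(load)   # 5, 6, 7, 3, 4, 5
```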
As our chapter journey concludes with a sneak peek into latency, I'll let the suspense linger. No spoilers here!
Curious about the series so far? Don't hesitate to check out the previous posts. If you're enjoying this ride, show some love with a like, a share, or a comment. Your thoughts are precious!
#Networking #Algorithms #BookReview #AlgorithmsToLiveBy #LinkedInLearning
11 - Game Theory [12/13 Series]
Let's jump into the next part of "Algorithms to Live By", where we talk about game theory. This is where Nash Equilibrium and trust come into play, and where we see that maths can sometimes fall short in optimizing social interactions within games.
The first concept, the "Price of Anarchy", shows us what happens when everyone optimizes for themselves rather than for the group. It's like removing all taxes, or always betraying in the Prisoner's Dilemma. The game "The Evolution of Trust" (https://ncase.me/trust/) illustrates this concept brilliantly - it's a 30-minute game that's definitely worth your time.
Then we've got the "Tragedy of the Commons", where small, selfish actions can add up to big problems for society. Not recycling, or wasting water, gas, or energy might not seem like a big deal, but when everyone does it, it can lead to significant waste.
To counteract this, we leverage "Mechanism Design". This ingenious approach modifies the game, aligning individual choices with the benefit of the group. One way it manifests is through various types of auctions: the Dutch Auction, the English Auction, and the Vickrey Auction, showcasing how a cleverly constructed system can direct personal motives toward societal benefits.
The Vickrey Auction is optimal in its design. Everyone bids secretly; the highest bidder wins but pays only the second-highest bid. That makes bidding what the item is truly worth to you the best strategy: bid higher and you risk paying more than you want, bid lower and you risk losing an auction you would have happily won. It's a neat idea, isn't it?
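A sealed-bid second-price auction is tiny enough to fit in a few lines; here's a sketch with made-up bidders and bids.

```python
def vickrey_auction(bids):
    """Sealed-bid second-price auction: the highest bidder wins
    but pays only the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

bids = {"alice": 120, "bob": 95, "carol": 110}   # ideally, everyone's true valuation
print(vickrey_auction(bids))   # ('alice', 110): Alice wins and pays Carol's bid
```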
Another striking example of mechanism design in capitalism is the imposition of taxes or subsidies. By taxing harmful activities (like carbon emissions) and subsidizing beneficial ones (like renewable energy investments), the system motivates individuals and businesses towards actions that are good for society at large, thus aligning personal gains with societal benefits.
The book gives us this final advice: "The road to hell is paved with intractable recursions, bad equilibria, and information cascades. Seek out games where honesty is the dominant strategy. Then just be yourself."
And that's it for this part of the book! We've got one more post in the series coming up. Until then, check out the previous posts in the series and make sure to like, share, and join the discussion!
#GameTheory #AlgorithmsToLiveBy #Auctions #Strategy #BookReview #SharingKnowledge #LinkedInLearning
12 - Conclusion [13/13 Series]
We've reached the end of the series on "Algorithms to Live By". In conclusion, life isn't a linear optimization problem; it doesn't always adhere to the rigid laws of equations and inequalities. We learn that it's not about finding the absolute optimum, but a solution that is "good enough".
The book concludes with the concept of "computational kindness", an elegantly simple notion. The principle calls for creating frameworks that simplify understanding and decision-making. This is not dissimilar to the construction of a well-designed algorithm, where the goal is to reduce computational complexity. Too often, we encounter situations where making a decision feels like solving an NP-hard problem.
Behavioural Economics has shown us that our understanding and assumptions are not always correct, making it okay to deviate from the path of 'perfect optimisation'. Remember, even the greatest mathematicians recognize that approximation and heuristics are essential tools in a world of imperfect information.
To revisit the intriguing exploration of these mathematical concepts and their applications to life, the entire series is available at: www.adao.tech.
If you found this exploration intellectually enriching, I encourage you to revisit the previous episodes. Likes and shares are always welcome! Comment below to start a discussion!
#AlgorithmsToLiveBy #ComputationalKindness #BehaviouralEconomics #MathematicalModelling #KnowledgeDiffusion