Addicted to Scrolling? 3 Small Changes to STOP Feeling Drained After Scrolling Social Media

by iHeartPodcasts

26 min · December 5, 2025

Overview (Jay Shetty / iHeartPodcasts)

Jay Shetty examines how social media recommendation algorithms exploit human instincts (comparison, outrage, negativity bias) to maximize engagement, why that leaves users anxious, depleted, and polarized, and — crucially — practical ways both platforms and individuals can reduce the harm. He blends research findings, experiments, and concrete steps you can take today to “starve or steer” the algorithm.

Key takeaways

  • Algorithms are predictive engines optimized for one question: what will keep you on the platform longest? They reward engagement (especially emotional/negative engagement), not truth or wellbeing.
  • The algorithm amplifies existing human tendencies (comparison, outrage, negativity bias). It didn't invent these behaviors, but it monetizes and magnifies them.
  • Small, deliberate changes by platforms (chronological feeds, sharing friction, transparency/audits) and by users (curating feed, adding friction to consumption/sharing habits) can materially reduce harm.
  • You have agency: the feed is adaptive — deliberate choices (who you follow, what you hover on, what you share) recode your recommendations.

How algorithms work (concise)

  • They track micro-behaviors: clicks, likes, shares, watch time, re-watches, hover time.
  • They predict what you’ll engage with next using aggregate behavior patterns.
  • They amplify emotionally engaging posts and adapt instantly to your actions.
  • Reinforcement cycle: you click emotionally hot content → the algorithm serves more of it → you become entrenched and less exposed to alternatives (see the sketch below).
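
The mechanics are easier to see in a toy sketch. The snippet below is purely illustrative — the classes, scores, and update rule are invented for this summary, not any platform's actual system. It ranks posts by predicted engagement alone and nudges the user's profile after every interaction, which is the reinforcement cycle described above.

```python
# Toy illustration of an engagement-only recommender (not any platform's real code).
from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str
    emotional_intensity: float  # 0..1; "hotter" content scores higher

@dataclass
class UserProfile:
    # Hypothetical per-topic affinity learned from clicks, hovers, watch time
    affinities: dict = field(default_factory=dict)

def predicted_engagement(user: UserProfile, post: Post) -> float:
    # Engagement estimate = learned interest in the topic, boosted by emotional heat.
    return user.affinities.get(post.topic, 0.1) * (1.0 + post.emotional_intensity)

def rank_feed(user: UserProfile, candidates: list[Post]) -> list[Post]:
    # The only objective is predicted engagement -- not accuracy, not wellbeing.
    return sorted(candidates, key=lambda p: predicted_engagement(user, p), reverse=True)

def record_interaction(user: UserProfile, post: Post, watch_seconds: float) -> None:
    # Every click/hover/watch nudges the profile toward more of the same.
    user.affinities[post.topic] = user.affinities.get(post.topic, 0.1) + 0.05 * watch_seconds

# One long watch of an "outrage" post is enough to push that topic to the top.
user = UserProfile()
feed = [Post("outrage-politics", 0.9), Post("friends-updates", 0.2), Post("science", 0.4)]
record_interaction(user, feed[0], watch_seconds=30)
print([p.topic for p in rank_feed(user, feed)])  # outrage-politics now ranks first
```

The point of the sketch is the feedback loop: the ranking feeds the interaction, and the interaction re-trains the ranking, so entrenchment happens in a handful of steps.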

How users get trapped

  • Design nudges: autoplay and infinite scroll remove friction and decision points (removing autoplay cuts session length by ~17 minutes).
  • Social reinforcement loops: outrage performs well — users who post outrage get attention and post more.
  • Unintended steering: neutral searches can lead to extremist/misogynistic recommendations (Mozilla/YouTube Regrets; UCL/Kent study on TikTok showing rapid rise in misogynistic content).
  • Incentive mismatch: platforms profit from engagement; removing toxic content can reduce time-on-site and ad clicks (one study found users spent ~9% less time when toxic posts were hidden).

Notable research & statistics cited

  • 56% of girls feel they can’t live up to beauty standards seen on social media.
  • ~90% of girls follow at least one account that makes them feel less beautiful.
  • False news is ~70% more likely to be retweeted than true stories; true stories take ~6x longer to reach 1,500 people.
  • Twitter’s read-before-retweet experiment increased article openings before sharing by 40%.
  • WhatsApp forwarding limits significantly slowed misinformation spread in India.
  • University of Amsterdam experiment: 500 AI chatbots on a stripped-down network still formed echo chambers and amplified extremes — suggesting human behavior, not just algorithms, drives polarization.
  • UCL/Kent (2024) study: TikTok accounts were shown four times more misogynistic content on For You pages within five days of casual scrolling.

Why humans enable this

  • Negativity bias: evolution prioritizes threats over opportunities.
  • Outrage as social currency: expressing outrage signals group loyalty.
  • Cognitive efficiency: simple negative narratives are easier to process than nuanced content.
  • Habit and low mental energy push people toward passive, attention-grabbing content.

Three platform-level solutions Jay recommends

  1. Chronological feeds by default

    • Not buried in settings; users can toggle to algorithmic if they want.
    • Facebook research: chronological feeds reduce polarization and misinformation (though engagement may fall).
  2. Add sharing friction

    • Read-before-share prompts, cooling-off periods, share limits, or requiring a full watch/read before sharing is allowed (see the sketch after this list).
    • Proven examples: Twitter’s read-before-retweet test; WhatsApp forwarding limits.
  3. Algorithmic transparency & independent audits

    • Publish how recommender systems prioritize content; allow external researchers to study impacts.
    • The EU Digital Services Act is moving platforms toward this model.
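
To make "sharing friction" concrete, here is a minimal sketch; the function name, thresholds, and messages are assumptions made for this summary, not any platform's actual implementation. A reshare goes through only if the item was opened, a short cooling-off period has passed, and a daily forward cap has not been hit.

```python
# Hypothetical sharing-friction check (names and thresholds invented for illustration).
COOL_OFF_SECONDS = 60      # assumed cooling-off period before resharing
MAX_DAILY_FORWARDS = 5     # assumed per-user forward cap, echoing WhatsApp-style limits

def can_share(opened_item: bool, seconds_since_open: float, forwards_today: int) -> tuple[bool, str]:
    if not opened_item:
        return False, "Open and read/watch the item before sharing."
    if seconds_since_open < COOL_OFF_SECONDS:
        return False, f"Cooling off: try again in {COOL_OFF_SECONDS - int(seconds_since_open)}s."
    if forwards_today >= MAX_DAILY_FORWARDS:
        return False, "Daily forward limit reached."
    return True, "OK to share."

# Example: resharing 10 seconds after opening the link is held back.
print(can_share(opened_item=True, seconds_since_open=10, forwards_today=2))
# (False, 'Cooling off: try again in 50s.')
```

The design idea is that each check inserts one small pause between impulse and amplification, which is the mechanism the read-before-retweet test and WhatsApp forwarding limits both rely on.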

Five practical steps for users (actionable)

  1. Curate intentionally: follow at least five accounts you normally wouldn’t (diversify your inputs).
  2. Engage deliberately: hover/like/comment on the content you want more of (the algorithm tracks these signals).
  3. Share differently: share five pieces of content you wouldn't usually share; this helps rewire your For You page.
  4. Morning boundary: don’t look at your phone first thing — avoid letting strangers (and the algorithm) set your mood for the day.
  5. Practice joy presence: celebrate friends’ wins and reduce emotional overreactions to negative content.

Jay demonstrates a simple feed-reset technique: follow quotes or meaningful creators, like and hover over content you want, and share it — within a few interactions your For You page will change.

Actionable checklist (what to do right now)

  • Toggle off autoplay and reduce push notifications.
  • Follow 5 creators who provide helpful, positive, or diverse perspectives.
  • Before sharing: read/watch the full item; ask “Do I understand this?” and “Am I amplifying outrage?”
  • Set a morning rule: no phone for X minutes after waking.
  • Subscribe to newsletters, read books, join offline communities to counterbalance digital echo chambers.

Notable quotes

  • "The algorithm doesn’t just know us — it depends on us. And if we learn how it feeds, we can decide whether to starve it or steer it."
  • "The algorithm's goal is not to make us polarized. It's not to make us happy. It's to make us addicted."
  • "We built a machine to know us. And it became us."

Conclusion

Jay Shetty frames social media harm as an interaction between machine incentives and human tendencies. While platforms must improve design and transparency, individual choices matter and can change the recommendation loop quickly. The episode ends with an empowering question: when you pick up your phone tonight, will you walk back into the same party (comparison and outrage), or will you finally leave? Takeaway: small, intentional changes (both by platforms and users) can reduce drain and restore agency.

If you want to act now: implement one platform-level tweak (notifications/autoplay) and two user practices (follow diversifying accounts; no-phone-first-thing). These three moves can shift your feed and reduce emotional exhaustion within days.