Rosano / Journal

190 entries under "article"

Friday, February 6, 2026

X : All your talk about reasoning makes you seem very anti the AI era.

When an entire culture decides that producing outputs matters more than understanding mechanisms, it works fine right up until the environment shifts and nobody remembers how to reason from first principles.

Wednesday, February 4, 2026

A spoiler for the future - Bitcoin

Austerity measures will have taken the route of unprecedented and radical decimation of the state - everything from state-provided healthcare to coastguards to income support to education will be practically gone, replaced with numerous forms of bitcoin-based insurance. If you can't afford it, you won't be able to gain access to it. There will be no state help, as the state can neither fund universal care nor determine whether you deserve support.

Is there a better word for 'hackathon'?

[Common hackathon activities like coding are not a good use of my time for an in-person event. I need quiet focus time and good ergonomics to do programming. Better to use these rare encounters with colleagues to chat, brainstorm, or do exploratory design work, for instance. I already start hacky prototypes on a whim anyway and don’t need an event to do it.]

WE ALL FEEL THE TRANSITION

I don't think it's the changeover itself that hurts. It's the speed. We all feel this transition. It creates a kind of thin corridor where many so-called shortcuts are currently being taken that are not really shortcuts at all. Outcomes and effects will simply be different. Efficiency is increasingly confused with impact.

i hope more people hear the call to be thoughtful in how they approach these new possibilities. with great speed, many are adopting something on shaky ground, ready to lock themselves in and throw away the key.

Tuesday, February 3, 2026

X : How do you use AI?

every question I ask is turned into a thesis, and its counter (the antithesis) is created. Two agents then take on those roles and the case is argued through several rounds (minimum of three, maximum of ten). A group of 12 agents then votes (with public reasoning) after each round - the first three rounds are merely indicative, and there's also a zero-round vote on the quality of the thesis / antithesis.

A judging agent then decides at the end of each vote whether the arguments are materially different and whether there has been a successful conclusion. Without a successful conclusion, the game continues (again, there must be at least 3 rounds). Both arguing agents have access to the argument, the counters, the voters' comments, and the votes. Each round they present a refined argument. A court recorder summarizes the thesis, the antithesis, the main arguments presented, and which argument eventually wins (if any does).
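The loop described above can be sketched in a few lines of Python. This is a minimal, hypothetical skeleton, not the actual implementation: `call_agent`, `vote`, and the margin-based `judge_concluded` are stand-ins for real LLM calls and judging logic; only the round structure (zero-round quality vote, minimum of three rounds, maximum of ten, recorder summary) follows the description.

```python
# Minimal sketch of the thesis/antithesis debate loop.
# call_agent, vote, and judge_concluded are hypothetical placeholders
# for real LLM calls; only the round structure mirrors the description.
import random

MIN_ROUNDS, MAX_ROUNDS, NUM_VOTERS = 3, 10, 12

def call_agent(role, context):
    """Placeholder for an LLM call; returns a canned response."""
    return f"{role}: responding to {len(context)} prior turns"

def vote(voters, transcript):
    """Each voter publishes (public reasoning elided here) a vote for one side."""
    return [(v, random.choice(["thesis", "antithesis"])) for v in voters]

def judge_concluded(transcript, ballots):
    """Stand-in judge: treat a large vote margin as a successful conclusion."""
    tally = sum(1 for _, b in ballots if b == "thesis")
    return abs(tally - (len(ballots) - tally)) > len(ballots) // 2

def summarize(transcript, rounds):
    """Court recorder: thesis, antithesis, round count, and winner (if any)."""
    final = rounds[-1]
    tally = sum(1 for _, b in final if b == "thesis")
    winner = ("thesis" if tally > len(final) / 2
              else "antithesis" if tally < len(final) / 2 else None)
    return {"thesis": transcript[0], "antithesis": transcript[1],
            "rounds": len(rounds) - 1, "winner": winner}

def debate(question):
    thesis = call_agent("thesis", [question])
    antithesis = call_agent("antithesis", [question, thesis])
    transcript = [thesis, antithesis]
    voters = [f"voter-{i}" for i in range(NUM_VOTERS)]
    rounds = [vote(voters, transcript)]  # zero-round vote on thesis/antithesis quality
    for r in range(1, MAX_ROUNDS + 1):
        # Both agents refine their arguments with full access to the transcript.
        transcript.append(call_agent("pro", transcript))
        transcript.append(call_agent("con", transcript))
        ballots = vote(voters, transcript)
        rounds.append(ballots)
        # First three rounds are indicative only; the judge may end the game after that.
        if r >= MIN_ROUNDS and judge_concluded(transcript, ballots):
            break
    return summarize(transcript, rounds)
```

The judge and voters here are random stubs, so the sketch only demonstrates the control flow: a run always lasts between three and ten rounds and ends with a recorder summary.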

Decentralized Social Media: What is it, how does it work?

In ActivityPub you get a bit more resilience in that other people's instances might go down, but once they're up again you'll resume synchronizing with them. Your main issue is that once your instance goes down, you personally can't participate anymore unless you make an account somewhere else.

AT protocol is a bit more complicated in that you have several different points of failure. If the firehose goes down, none of the app views will see new posts, but they should keep their existing ones. If an app view goes down, others will still work and you'd still be able to pull from people's PDSs. If your PDS goes down you can't post, but if someone else's goes down you can still see everything else.

Nostr has the most resilient model in that you can use as many relays as you want and if some of them go down you'd be fine so long as you can find more.
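Nostr's resilience property above can be shown with a small sketch: publishing fans out to every configured relay, and a downed relay is simply skipped. This is a hypothetical illustration, assuming a made-up `Relay` stub, not a real Nostr client; a real client would speak the relay websocket protocol and sign events.

```python
# Hypothetical sketch of Nostr-style multi-relay publishing:
# any single relay can be down without the publish failing.
class Relay:
    """Stub relay: stores accepted events, raises if 'down'."""
    def __init__(self, name, up=True):
        self.name, self.up, self.events = name, up, []

    def send(self, event):
        if not self.up:
            raise ConnectionError(self.name)
        self.events.append(event)

def publish(event, relays):
    """Fan the event out to all relays; succeed if at least one accepts."""
    accepted = []
    for relay in relays:
        try:
            relay.send(event)
            accepted.append(relay)
        except ConnectionError:
            continue  # a downed relay is not fatal
    if not accepted:
        raise RuntimeError("all relays down: find more relays")
    return accepted
```

The failure mode only appears when every relay is unreachable at once, which matches the point in the text: you're fine so long as you can find more relays.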

Behind the AI boom, the armies of overseas workers in ‘digital sweatshops’

More than 2 million people in the Philippines perform this type of “crowdwork”, according to informal government estimates, as part of AI’s vast underbelly. While AI is often thought of as human-free machine learning, the technology actually relies on the labour-intensive efforts of a workforce spread across much of the global south, one that is often subject to exploitation.

Charisse, 23, said she spent four hours on a task that was meant to earn her $2, and Remotasks paid her 30 cents.

Founded in 2016 by young college dropouts and backed by some $600m in venture capital, Scale AI has cast itself as a champion of American efforts in the race for AI supremacy. In addition to working with large technology companies, Scale AI has been awarded hundreds of millions of dollars to label data for the US Department of Defense.

Monday, January 26, 2026

Welcome to Gas Town

Stage 1: Zero or Near-Zero AI: maybe code completions, sometimes ask Chat questions

Stage 2: Coding agent in IDE, permissions turned on. A narrow coding agent in a sidebar asks your permission to run tools.

Stage 3: Agent in IDE, YOLO mode: Trust goes up. You turn off permissions, agent gets wider.

Stage 4: In IDE, wide agent: Your agent gradually grows to fill the screen. Code is just for diffs.

Stage 5: CLI, single agent. YOLO. Diffs scroll by. You may or may not look at them.

Stage 6: CLI, multi-agent, YOLO. You regularly use 3 to 5 parallel instances. You are very fast.

Stage 7: 10+ agents, hand-managed. You are starting to push the limits of hand-management.

Stage 8: Building your own orchestrator. You are on the frontier, automating your workflow.

Saturday, January 24, 2026

The Great Entertainment

Reagan proved you could use TV aesthetics in governance. Trump is proving you cannot replace governance with TV.

[the world is not given by parents, but borrowed from children.]

Thursday, January 22, 2026

Why I Left iNaturalist

This post is an announcement for those who were unaware, an explanation for those who are confused, and a record so I don’t forget.

Monday, January 19, 2026

A Social Filesystem

what we make with a tool does not belong to the tool. A manuscript doesn’t stay inside the typewriter, a photo doesn’t stay inside the camera, and a song doesn’t stay in the microphone.

Tuesday, January 13, 2026

What's happening on Jan 13th?

If you don’t have access to a dentist in your trust network, but you trust me, you can “borrow” my connection here.

if someone with resources wants to give you money, you should say no if it’s clear to you it will make your life worse, even if it’s not clear to them. Don’t let their (bad) judgement override your clarity.

Friday, January 9, 2026

LLMs are coherence engines, not truth engines

[LLMs generate coherence more than truth, with] no access to the world, no sensory grounding, no lived experience, and no intrinsic way to check correspondence between its outputs and reality.

[The same is true of humans, as we] construct narratives, causal explanations, identities, and moral frameworks that hang together, rather than ones that are objectively correct. [We tend towards] narrative consistency, social acceptability and reinforce biases based on beliefs.

science works because it builds institutional scaffolding that forces grounding through measurement, replication, falsification, and peer review. Without grounding, both humans and LLMs drift into elegant nonsense.

The risk with LLMs is not that they lie, but that they speak with fluent confidence in domains where humans already confuse coherence with truth.

Tuesday, January 6, 2026

I guess I was wrong about AI persuasion

“The best diplomat in history” wouldn’t just be capable of spinning particularly compelling prose; it would be everywhere all the time, spending years in patient, sensitive, non-transactional relationship-building with everyone at once. It would bump into you in whatever online subcommunity you hang out in. It would get to know people in your circle. It would be the YouTube creator who happens to cater to your exact tastes. And then it would leverage all of that.

We can be convinced of a lot. But it doesn’t happen because of snarky comments on social media or because some stranger whispers the right words in our ears. The formula seems to be:

  1. repeated interactions over time
  2. with a community of people
  3. that we trust

You can try to like stuff

When I encountered spinach as an adult, instead of tasting a vegetable, I tasted a grueling battle of will. Spinach was dangerous—if I liked it, that would teach my parents that they were right to control my diet.

On planes, the captain will often invite you to, “sit back and enjoy the ride”. This is confusing. Enjoy the ride? Enjoy being trapped in a pressurized tube and jostled by all the passengers lining up to relieve themselves because your company decided to cram in a few more seats instead of having an adequate number of toilets? Aren’t flights supposed to be endured?

Confessions to a data lake

visual interfaces of our tools should faithfully represent the way the underlying technology works: if a chat interface shows a private conversation between two people, it should actually be a private conversation between two people, rather than a “group chat” with unknown parties underneath the interface.

We are using LLMs for the kind of unfiltered thinking that we might do in a private journal – except this journal is an API endpoint. An API endpoint to a data lake specifically designed for extracting meaning and context. We are shown a conversational interface with an assistant, but if it were an honest representation, it would be a group chat with all the OpenAI executives and employees, their business partners / service providers, the hackers who will compromise that plaintext data, the future advertisers who will almost certainly emerge, and the lawyers and governments who will subpoena access.

When you work through a problem with an AI assistant, you’re not just revealing information - you’re revealing how you think. Your reasoning patterns. Your uncertainties. The things you’re curious about but don’t know. The gaps in your knowledge. The shape of your mental model.

When advertising comes to AI assistants, they will slowly become oriented around convincing us of something (to buy something, to join something, to identify with something), but they will be armed with total knowledge of your context, your concerns, your hesitations. It will be as if a third party pays your therapist to convince you of something.

Monday, January 5, 2026

A Gentle Introduction To Learning Calculus

Math and poetry are fingers pointing at the moon. Don’t confuse the finger for the moon.

Jackson Kiddard

Anything that annoys you is teaching you patience.

Anyone who abandons you is teaching you how to stand up on your own two feet.

Anything that angers you is teaching you forgiveness and compassion.

Anything that has power over you is teaching you how to take your power back.

Anything you hate is teaching you unconditional love.

Anything you fear is teaching you the courage to overcome your fear.

Anything you can’t control is teaching you how to let go.

Sunday, January 4, 2026

How do we build the future with AI?

[The bigness and slowness of government] is supposed to create space and resources to account for the communities that a “lean” approach deliberately ignores.

building for yourself on a saturated platform doesn’t shift paradigms if you are already the main character

it’s not like masses of sheeple relish the experience of catching a cab and couldn’t describe a theoretical better option if they tried. It’s that realizing such a thing requires availability of copious investment capital in the face of non-negligible risk. People who can pursue this kind of thing are either previous-tech-exit-rich or poised-to-convince-venture-capitalists-rich. Their stories are fun to tell and hear, but not practical mogul origin stories for the vast majority of tech workers.

In the nineties, the Dorm Room Garage Dudes had an appreciable head start on relationships and resources to build the commercial web. But by the time the mobile platform came along, those same people had become billionaire tech moguls with cliques that garnered names like ‘The Paypal Mafia.’ This gave them an order of magnitude more opportunity to move first on mobile. Over time, that lead has continued to grow, and with it the time from market creation to market saturation has shortened.

Immutable Infrastructure, Immutable Code

A system becomes legacy when understanding it requires historical knowledge that isn't encoded anywhere except the code itself.

The tragedy is that teams recreate this failure mode faster with AI, because mutation feels cheap while understanding quietly becomes expensive. You can generate a thousand lines in seconds. But the moment you start editing those lines, you've created an artifact that can only be understood historically. You've created brittle legacy code in an afternoon.

If knowledge only exists in the implementation, it's not knowledge. It's risk. Regeneration forces you to make the implicit explicit, or accept that it wasn't essential.

Burn it. Regenerate it. Trust what survives the fire.