Tuesday, July 2, 2024

via "PostgreSQL and UUID as primary key" i stumbled upon TSID Generator, which looks interesting:

A Java library for generating Time-Sorted Unique Identifiers (TSID).
In summary:

  • Sorted by generation time;
  • It can be stored as an integer of 64 bits;
  • It can be stored as a string of 13 chars;
  • String format is encoded to Crockford's base32;
  • String format is URL safe, is case insensitive, and has no hyphens;
  • Shorter than UUID, ULID and KSUID.
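The linked library is Java, but the layout behind a time-sorted 64-bit ID is easy to sketch. Here's a minimal Python sketch, assuming the commonly described default layout (42-bit millisecond timestamp, 10-bit node ID, 12-bit per-node counter) and Crockford's base32 for the 13-char string form — the function names are mine, not the library's API:

```python
import time

# Crockford's base32 alphabet: 0-9 plus uppercase letters, excluding I, L, O, U
CROCKFORD = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"

def make_tsid(node_id, counter, millis=None):
    """Pack a 64-bit ID: 42-bit ms timestamp | 10-bit node | 12-bit counter."""
    if millis is None:
        millis = int(time.time() * 1000)
    return (
        ((millis & ((1 << 42) - 1)) << 22)
        | ((node_id & 0x3FF) << 12)
        | (counter & 0xFFF)
    )

def to_base32(tsid):
    """Encode the 64-bit ID as 13 Crockford base32 chars (5 bits per char)."""
    chars = []
    for shift in range(60, -1, -5):  # 13 groups of 5 bits, most significant first
        chars.append(CROCKFORD[(tsid >> shift) & 0x1F])
    return "".join(chars)
```

Because the timestamp sits in the most significant bits and the encoding is fixed-length over an ASCII-ordered alphabet, both the integer and the string form sort by generation time; IDs from the same millisecond are ordered by the counter.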

also, from the comments: https://sqids.org/

Sqids is a small open-source library that can produce short, unique, random-looking IDs from numbers.
The best way to think about it is like a decimal-to-hexadecimal converter, but with a few extra features.

What is it good for?

Link shortening, generating unique event IDs for logging, generating IDs for products/objects on a website (like YouTube does for videos), generating short IDs for text messages, confirmation codes in emails, etc.
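The core of the "decimal-to-hexadecimal converter, but with extra features" analogy is plain base conversion over a custom alphabet. A minimal sketch of that idea — not the actual sqids algorithm, which additionally shuffles the alphabet per-ID and filters out blocklisted words:

```python
# Arbitrary alphabet; sqids shuffles its alphabet to make IDs look random.
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyz"

def encode(n):
    """Convert a non-negative integer to a short ID in base len(ALPHABET)."""
    if n == 0:
        return ALPHABET[0]
    digits = []
    while n > 0:
        n, rem = divmod(n, len(ALPHABET))
        digits.append(ALPHABET[rem])
    return "".join(reversed(digits))

def decode(s):
    """Inverse of encode: interpret the string as base-len(ALPHABET) digits."""
    n = 0
    for ch in s:
        n = n * len(ALPHABET) + ALPHABET.index(ch)
    return n
```

A 9-digit number fits in 6 characters of this base-36 alphabet, which is why these IDs stay short even for large counters.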

Will all this AI investment pay off? by Sergey Alexashenko raises an interesting point:

For investors to make money “because AGI”, a few things have to happen:
7. AGI can’t result in such a drastic change in society that the concept of money as we know it disappears.

of course this isn't a problem specific to AI and AGI - systems try to preserve their own relevance. the question is whether we, as a society, even want this, and in what ways it may hold back the next iteration of societal systems. what if we stop maximizing for wealth, whether out of desire or necessity, and focus on a different value - happiness and quality of life, say, or minimizing the worst effects of the climate catastrophe?

AGI developed by investors will be bred with the expectation of furthering the economic system that enabled it, but - as with human children - it's impossible to say whether it will really adopt its "parental" value system or develop its own.

tags: AI AGI money futurism

Thursday, June 6, 2024

The Intellectual Obesity Crisis

A few years back I thought this was a good thing. The constant barrage of information through link aggregators (hn, reddit) and various blogs led to my acquiring some (mostly slightly more than superficial) knowledge about many random topics, which does have its upsides. On the other hand, it diminishes the time and energy available to do or learn something actually productive. That said, the random information I consumed isn't really the kind of "junk information" the linked article describes; it's just info that's only partly - often not yet - relevant to me.

But it has comparable effects:

The vast majority of the online content you consume today won't improve your understanding of the world. In fact, it may just do the opposite; recent research suggests that people browsing social media tend to experience “normative dissociation” in which they become less aware and less able to process information, to such an extent that they often can’t recall what they just read.

(emphasis mine)

This is something i experience often.

As an example, take this article titled "No one buys books".

read here: https://www.elysian.press/p/no-one-buys-books

It's not junk - on the contrary, it's an interesting piece. It just has very little relevance to me personally: I don't work in publishing, and i'm not an author. It's satisfying to learn some interesting tidbits, but on the other hand i really had trouble concentrating enough to read the whole piece.

maybe related: Why Can’t I Motivate Myself To Work? Spoiler: It's Dopamine Sickness

Tuesday, June 4, 2024

The Moral Economy of the Shire

this is quite surprising. tolkien modelled the shire upon his personal ideal, his eden. and fantasy usually ignores one key point: subsistence farming doesn't yield much surplus, yet the depicted hobbits are able to live lives of simple pleasures, mostly farming and then eating what they farmed.

usually this would be hand-waved away with some explanation: hobbits could be so magically gifted at farming that they'd create a food surplus big enough to allow for a relatively large allotment of luxury and leisure time.

tolkien isn't the type to hand-wave anything away, though; here's a - sobering - explanation:

Understanding this sheds a great deal of light on Tolkien’s explanation of the Shire’s governance. The lack of much organized administration is not, as some suggest, a form of democratic anarchism, but instead the result of elite control.

read: The Moral Economy of the Shire

update: don't forget the comments, they're great!

Saturday, May 11, 2024

How comfortable are we letting AI find scientific results without having access to the underlying principles?

There's an article "AlphaFold 3 predicts the structure and interactions of all of life’s molecules" (read here), but imo the gem is the HN discussion about how to handle the problem of "science done by AI".

for example in this thread by moconnor:

What happens when the best methods for computational fluid dynamics, molecular dynamics, nuclear physics are all uninterpretable ML models? Does this decouple progress from our current understanding of the scientific process - moving to better and better models of the world without human-interpretable theories and mathematical models / explanations? Is that even iteratively sustainable in the way that scientific progress has proven to be?

and the answer by dekhn:

If you're a scientist who works in protein folding (or one of those other areas) and strongly believe that science's goal is to produce falsifiable hypotheses, these new approaches will be extremely depressing, especially if you aren't proficient enough with ML to reproduce this work in your own hands.

If you're a scientist who accepts that probabilist models beat interpretable ones (articulated well here: https://norvig.com/chomsky.html), then you'll be quite happy because this is yet another validation of the value of statistical approaches in moving our ability to predict the universe forward.

If you're the sort of person who believes that human brains are capable of understanding the "why" of how things work in all its true detail, you'll find this an interesting challenge- can we actually interpret these models, or are human brains too feeble to understand complex systems without sophisticated models?

If you're the sort of person who likes simple models with as few parameters as possible, you're probably excited because developing more comprehensible or interpretable models that have equivalent predictive ability is a very attractive research subject.

or the question by jprete:

The goal of science has always been to discover underlying principles and not merely to predict the outcome of experiments. I don't see any way to classify an opaque ML model as a scientific artifact since by definition it can't reveal the underlying principles. Maybe one could claim the ML model itself is the scientist and everyone else is just feeding it data. I doubt human scientists would be comfortable with that, but if they aren't trying to explain anything, what are they even doing?

Friday, April 26, 2024

Monday, April 15, 2024

Monday, April 8, 2024