

A Web of Trust/friend-of-a-friend system could somewhat work, where every person has their own personal trust scores for others, including trust implied by navigating the graph. Global trust scores are susceptible to Sybil attacks, but local ones are more resilient (still susceptible, though). Hyphanet seems to have a decent WoT implementation, though the user count is so low that it hasn't really gone through a trial by fire yet.
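A minimal sketch of what "navigating the graph" could look like: direct trust scores are taken at face value, and trust in a friend-of-a-friend is attenuated per hop. Everything here (the `decay` factor, the hop limit, multiplying scores along a path) is an illustrative assumption, not how Hyphanet's WoT actually computes it.

```python
def local_trust(graph, me, target, decay=0.5, max_hops=3):
    """Hypothetical friend-of-a-friend trust score.

    graph[a][b] is a's direct trust in b, in [0, 1].
    Trust along a path is the product of edge scores,
    attenuated by `decay` per extra hop; the best path wins.
    """
    best = {}
    frontier = [(me, 1.0, 0)]
    while frontier:
        node, weight, hops = frontier.pop()
        if hops >= max_hops:
            continue
        for friend, score in graph.get(node, {}).items():
            w = weight * score * (decay ** hops)
            if w > best.get(friend, 0.0):
                best[friend] = w
                frontier.append((friend, w, hops + 1))
    return best.get(target, 0.0)

# Toy example: I trust alice 0.9; alice trusts bob 0.8.
web = {
    "me":    {"alice": 0.9},
    "alice": {"bob": 0.8},
}
print(local_trust(web, "me", "alice"))  # 0.9 (direct)
print(local_trust(web, "me", "bob"))    # 0.36 (0.9 * 0.8 * 0.5)
```

Because the score is computed from *my* vantage point, a Sybil swarm that only trusts itself never gains weight unless someone I (transitively) trust vouches for it, which is why local scores hold up better than a single global ranking.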

I don’t think LLMs will become AGI, but… planes don’t fly by flapping their wings. We don’t necessarily need to know how animal brains work to achieve AGI, and it doesn’t necessarily have to work anything like animal brains. It’s quite possible that if/when AGI is achieved, it will be completely alien.