Thank you President Michael B. Jordan Peterson 🇺🇸😎
Morpheus drinkin a forty in the death basket.
You’re correct to identify that your position is internally inconsistent: you (A) don’t want the innocent to be wrongly executed, yet (B) want the option to enact retributive punishment against certain offenders.
Let’s analyze these two imperatives:
The benefits of (A) are quite self-evident. It’s bad to execute people for no reason; it’s maybe the most brutal and terrifying thing the state can do to a person. And wherever capital punishment exists, it happens with non-zero probability.
The benefits of (B) are that you get a nice bellyfeel that you’ve set the universe into karmic alignment. Since there’s no evidence that capital punishment has a deterrent effect on crime (comparisons between states and countries with and without it bear this out), this is really the ONLY benefit of position (B).
So if you want to prioritize what’s best overall for reducing harm in society, then select (A). If you enjoy appointing yourself the moral arbiter of karma by deciding who “deserves” to live and die (and killing some innocent people is a price worth paying), then select (B).
Simples!
This is orthogonal to the topic at hand. How does the chemistry of biological synapses alone result in a different type of learned model that therefore requires different types of legal treatment?
The overarching (and relevant) similarity between biological and artificial nets is the concept of connectionist distributed representations, and the projection of data onto lower dimensional manifolds. Whether the network achieves its final connectome through backpropagation or a more biologically plausible method is beside the point.
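To make that concrete, here’s a minimal toy sketch (my own example in numpy, not from any particular paper or model): a tiny autoencoder with a 2-unit bottleneck is forced to learn a distributed code that projects 10-dimensional inputs onto a lower-dimensional manifold. That the learning rule here happens to be plain backprop/gradient descent is incidental to the representational point.

```python
# Toy sketch (assumed example): a 10 -> 2 -> 10 autoencoder. The 2-unit bottleneck
# forces a distributed, low-dimensional code regardless of which learning rule
# produced the weights (here: vanilla gradient descent / backprop).
import numpy as np

rng = np.random.default_rng(0)

# Data that secretly lives near a 2-D subspace embedded in 10 dimensions.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 10))

W_enc = rng.normal(scale=0.1, size=(10, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 10))   # decoder weights
lr = 0.01

for step in range(2000):
    H = np.tanh(X @ W_enc)       # the distributed 2-D representation
    X_hat = H @ W_dec            # reconstruction from that code
    err = X_hat - X

    # Gradient descent on squared reconstruction error (backprop through one hidden layer).
    grad_dec = H.T @ err / len(X)
    grad_enc = X.T @ ((err @ W_dec.T) * (1 - H**2)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

print("mean squared reconstruction error:", float(np.mean(err**2)))
```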
BRRRRR skibidi bop bop bop yes yes
Grab him by the bussy
You might want to configure it from scratch, with exactly the tools and utilities you want (e.g. networking utilities, desktop environment). Or you might just find the process fun and interesting. Some people also take issue with how Canonical is run and the decisions it makes.
That method has been used to train other LLMs; I believe OpenAssistant did this.
Money ain’t got no owners. Only spenders.
I’m 2 and I use a smartphone that only executes Fortran through punch cards.
I can sympathize with them, as the way economic thought is portrayed in popular journalism makes it seem like ivory tower eggheads concocting overly-mathematized models to support bad policies. And I do believe there is some truth to this, with bad economists hiding shitty ideas behind the veneer of respectability that math provides. Science and technology are almost fetishized in our culture, especially by those who don’t really study them academically, and I believe disingenuous economists and politicians use this fact to their advantage.
What they must realize is that identifying the flaws in these bad economic models and overhauling them leads to… more economics! Hopefully better economics. But they’re still participating in the field known as economics.
For instance, noting that the “homo economicus” doesn’t exist IRL isn’t really the gotcha that many people think it is. Rather, anybody doing economics properly is acutely aware of this fact and is just exploring the limits of what such a simplifying assumption can yield - e.g. the surprisingly large mileage you get from the very parsimonious axioms of utility given by von Neumann and Morgenstern. The really interesting and difficult part is thinking about how and why real-life data deviate from the predictions made by the simple assumptions.
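For reference, here’s the standard statement of that result (textbook form, not something spelled out above): if preferences over lotteries satisfy completeness, transitivity, continuity, and independence, they can be represented by expected utility, unique up to a positive affine transformation.

```latex
% Von Neumann–Morgenstern representation (standard textbook statement).
% For lotteries L = (p_1, x_1; \dots; p_n, x_n) and M = (q_1, y_1; \dots; q_m, y_m):
L \succsim M
\quad\Longleftrightarrow\quad
\sum_{i=1}^{n} p_i \, u(x_i) \;\ge\; \sum_{j=1}^{m} q_j \, u(y_j)
```

Everything richer (risk aversion, insurance, asset pricing) comes from studying how real behavior deviates from this baseline, which is exactly the interesting part.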
Economics is an extremely broad field, encompassing things that are closely related to psychology (e.g. behavioral economics), things that are related to physics and natural science (e.g. ecological economics), and things that are pure mathematics (e.g. game theory), so trying to say it has more in common with one than the other is kind of a vacuous statement or category error.
To those who are angry at normative claims and policy prescriptions from the economic orthodoxy/zeitgeist, I understand your frustration. I would say what you’re angry at is not economics itself (which is simply the study of scarcity and related human behavior) but economics done badly. Such as the Chicago school.
Setting aside the emotional baggage around these issues, there are some really beautiful and fascinating topics in economics that borrow very directly from statistical physics in the analysis of financial time series data, originally identified by Benoit Mandelbrot. The same ideas show up in a wide variety of fields, like network traffic, the distribution of metals in ore, turbulent flows in fluid dynamics, and the distribution of galaxies in space.
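As a toy illustration of the kind of stylized fact involved (heavy tails; this is simulated data and my own sketch, not an analysis of any real series): a Gaussian model drastically underestimates the frequency of large moves compared with a fat-tailed distribution.

```python
# Toy sketch with simulated data: heavy-tailed "returns" vs a Gaussian rescaled to
# the same standard deviation. The fat-tailed series produces far more extreme
# moves - one of the stylized facts of financial time series.
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

gaussian = rng.normal(size=n)
heavy = rng.standard_t(df=3, size=n)   # Student-t, df=3: power-law tails
heavy /= heavy.std()                   # rescale to unit standard deviation

for k in (3, 4, 5):
    p_gauss = np.mean(np.abs(gaussian) > k)
    p_heavy = np.mean(np.abs(heavy) > k)
    # At 5 sigma the Gaussian count may well be zero in a sample of this size.
    print(f"P(|move| > {k} sigma): gaussian={p_gauss:.2e}  heavy-tailed={p_heavy:.2e}")
```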
There actually exists an open source community for reverse-engineering EV motors, inverters, battery charging modules, BMS, and everything else necessary to build a DIY car from scrapyard components: https://openinverter.org/wiki/Main_Page
What’s that thing on the left? Looks like it’d be good deep fried, with some sriracha mayo.
In my opinion he should step down as CEO of Linux, and hand the job over to someone more qualified, like Ethan Zusks.
I get that, but what I’m saying is that calling deep learning “just fancy comparison engine” frames the concept in an unnecessarily pessimistic and sneery way. It’s more illuminating to look at the considerable mileage that “just pattern matching” yields, not only for the practical engineering applications, but for the cognitive scientist and theoretician.
Furthermore, what constitutes being “actually creative”? Consider DeepMind’s AlphaGo Zero model:
Mok Jin-seok, who directs the South Korean national Go team, said the Go world has already been imitating the playing styles of previous versions of AlphaGo and creating new ideas from them, and he is hopeful that new ideas will come out from AlphaGo Zero. Mok also added that general trends in the Go world are now being influenced by AlphaGo’s playing style.
Professional Go players and champions concede that the model developed novel styles and strategies that now influence how humans approach the game. If that can’t be considered a true spark of creativity, what can?
To counter the grandiose claims that present-day LLMs are almost AGI, people go too far in the opposite direction. Dismissing them as mere “line of best fit analysis” fails to recognize the power, significance, and difficulty of extracting meaningful insights and capabilities from data.
Aside from the fact that many modern theories in human cognitive science are actually deeply related to statistical analysis and machine learning (such as embodied cognition, Bayesian predictive coding, and connectionism), referring to it as a “line” of best fit is disingenuous because it downplays the important fact that the relationships found in these data are not lines, but highly non-linear, high-dimensional manifolds. The development of techniques to efficiently discover these relationships in giant datasets is genuinely a HUGE achievement in humanity’s mastery of the sciences: they’ve allowed us to create programs for tasks that would be impossible to write out explicitly as classical programs. In particular, our current ability to create classifiers and generators for unstructured data like images would have been unimaginable a couple of decades ago, yet we’ve already begun to take it for granted.
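To make the “not a line” point concrete, here’s a small toy sketch (my own example, numpy only): data lying exactly on a one-dimensional but curved manifold (a helix in 3-D) is poorly summarized by any single straight line, even though a nonlinear description needs only one parameter.

```python
# Toy sketch: points on a helix form a 1-D manifold embedded in 3-D, yet no single
# best-fit line (top principal component) captures most of their variance.
import numpy as np

rng = np.random.default_rng(1)
t = rng.uniform(0, 6 * np.pi, size=2000)             # one intrinsic coordinate
helix = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])

centered = helix - helix.mean(axis=0)
_, singular_values, _ = np.linalg.svd(centered, full_matrices=False)
explained = singular_values**2 / np.sum(singular_values**2)

print("variance explained by the best-fit line (1st PC):", round(float(explained[0]), 3))
print("intrinsic dimension of the data: 1 (the parameter t)")
```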
So while it’s important to temper expectations - we are a long way from seeing anything resembling AGI as it’s typically conceived of - oversimplifying all neural models as “just” line fitting blinds you to the true power and generality that such a framework of manifold learning through optimization represents, as it relates to information theory, energy and entropy in the brain, engineering applications, and the nature of knowledge itself.
The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype from reigning supreme.
To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. It seems like it would instead be better to ensure that half-assed science journalism and science “popularizers” actually do their jobs.
I love cars way more than the next guy, but the meme clearly says “urban transportation”.