Friday, October 17, 2025

NGC1275 CDK24 vs Hubble - AstroBin

https://app.astrobin.com/search?p=eJy7eb0ktaLEVtXcSdXIqCwxpzQVSKsaOwJJv%2FRkQyNzUxDfyBlI5iaWJGeEVBYgVDj6%2BMBl8%2FNyKoNTE4uSMzzzQjJLclKLHfNSXFKLk4syC0oy8%2FOKIbrSEnOKU1XNXdSKS5OyUpNLirFYbQoS8HMHmmoAc4CpC1Y35JXm5IAMK0hMT7U1BFPBmVVApoEBANg3QIE%3D&i=vuzvc0

Wavy-Scope | Handheld Signal Finder to Detect the Sun, the Moon and Even Satellites in Space Using Radio Signals | Portable Handheld Radio Telescope : 12 Steps (with Pictures) - Instructables

https://www.instructables.com/Wavy-Scope-Handheld-Signal-Finder-to-Detect-the-Su/

NYTimes: A C.I.A. Secret Kept for 35 Years Is Found in the Smithsonian’s Vault

https://www.nytimes.com/2025/10/16/science/kryptos-cia-solution-sanborn-auction.html?smid=nytcore-ios-share&referringSource=articleShare

Chromatin loops are an ancestral hallmark of the animal regulatory genome | Nature

https://www.nature.com/articles/s41586-025-08960-w

Loops of DNA Equipped Ancient Life To Become Complex | Quanta Magazine

https://www.quantamagazine.org/loops-of-dna-equipped-ancient-life-to-become-complex-20251008/

Thursday, October 16, 2025

[2510.13129] Numerical Cosmology

https://arxiv.org/abs/2510.13129

Deep heat

https://web.ece.ucsb.edu/~zhengzhang/journals/2025-TCPMT-DeepOHeat-v1.pdf

Three Anti-Inflammatory Supplements Can Really Fight Disease, according to the Strongest Science | Scientific American

Which Anti-Inflammatory Supplements Actually Work?

Experts say the strongest scientific studies identify three compounds that fight disease and inflammation

Capsules of omega-3 fatty acids show some of the best evidence as anti-inflammatories.

Inflammation has two faces. It can be short-lived like the swelling after a twisted ankle or a two-day fever when you get a mild flu, both part of the healing process. Or it can be a longer-lasting and more damaging affliction—chronic, low-grade inflammation that lingers in the body for years without obvious symptoms, silently harming cells. A steady stream of studies has connected this type of chronic inflammation to many serious conditions, including Alzheimer's, heart disease, some cancers, and autoimmune illnesses such as lupus.

These findings have begun to reframe how scientists think about disease and some of its causes. They've also created a booming market for supplements promising to lower chronic inflammation. These pills, capsules and powders are projected to become a $33-billion industry by 2027, offering consumers a sense of control over a complex and confusing ailment. Although thousands of products claim to "support immunity" or "reduce inflammation," most lack solid evidence.

Chronic inflammation is damaging because it involves immune system cells and proteins that typically fight short-term battles against bacteria, viruses, and other pathogens. But when these immune system components stay activated for years, they begin to hurt healthy cells and organs. They are intended to break down invading microbes, but over time their ongoing activity can harm blood vessels, for instance, by damaging normal cells that make up the vessels' inner linings or promoting the growth of plaques. That can lead to clots that interrupt or cut off blood flow, increasing the risk of heart attacks and strokes.


We reviewed dozens of studies and spoke with researchers to find out whether any supplements demonstrate anti-inflammatory activity not just in laboratory animals and cultured cells but in human trials. Just three compounds, it turns out, have good evidence of effectiveness: omega-3 fatty acids, curcumin and—in certain ailments—vitamin D.

Graphic describes the similarities and differences between acute and chronic inflammation. Three examples are shown: an acute inflammatory response to a microbial infection, and chronic inflammation in the form of heart disease and rheumatoid arthritis.

What is good evidence? We looked for consistent results across several studies that scientists described as large and well designed. Many of the more convincing trials focus on biomarkers that researchers use to track inflammation in the body. These include C-reactive protein (CRP), a molecule produced by the liver when inflammation is active, and cytokines, which are chemical messengers such as interleukin-6 (IL-6) and tumor necrosis factor alpha (TNF-α), both secreted by immune and fat cells.

Still, interpreting these markers isn't straightforward. "We don't have a universally accepted or standardized measurement," says Frank Hu, chair of the department of nutrition at Harvard University. And inflammation involves hundreds of different types of cells and many signaling pathways, adds Prakash Nagarkatti, director of the National Institutes of Health Center of Research Excellence in Inflammatory and Autoimmune Diseases at the University of South Carolina. This complexity makes it difficult to prove that any supplement works consistently.

The compounds that do show promise will not cure cancer or halt dementia. But they may help quiet the kind of underlying inflammation that has been tied to risks of illness.

OMEGA-3 FATTY ACIDS

Herring is a rich source of omega-3 fatty acids.

Among the hundreds of supplements tested for their effects on human health, omega-3 fatty acids are supported by some of the most compelling evidence. And scientists understand why they work. Two of the main types of omega-3s are eicosapentaenoic acid and docosahexaenoic acid, better known as EPA and DHA. The body metabolizes them into signaling molecules that block the production of certain cytokines and disrupt the nuclear factor κB pathway, which governs the expression of genes tied to inflammation.

Multiple studies suggest that omega-3 supplements can reduce markers of chronic inflammation, Hu says, especially among people with underlying health conditions. A large, carefully controlled trial called VITAL (officially the Vitamin D and Omega-3 Trial), which followed more than 25,000 adults for about five years, found that omega-3 supplements slightly reduced CRP in people who rarely ate fish—fish is a natural omega-3 source, so these people were getting almost all their omega-3s from the supplements. The omega-3 supplements also were associated with a 40 percent reduction in heart attacks among those consuming the least fish. "The people who benefit the most from these supplements are people who start out with lower intake," says JoAnn Manson, an endocrinologist at Harvard Medical School who co-led the study.

Smaller trials have suggested that omega-3 supplementation can reduce certain markers of inflammation—TNF-α, IL-6, CRP and IL-8—especially in people with conditions such as heart failure, Alzheimer's and kidney disease. One 2012 trial found that small amounts—about 1.25 or 2.5 grams per day—lowered IL-6 levels by 10 or 12 percent, respectively, over four months. A similar group got a placebo instead, and their IL-6 levels increased by 36 percent during that period.

Taking omega-3 fatty acid supplements was associated with a 40 percent reduction in heart attacks among people in a trial who ate the least amount of fish.

But the evidence across various trials is hard to compare. "There is still a question regarding which is the optimal dose and the optimal duration because different studies have used different doses," Hu says. And in healthy people, who have low baseline inflammation, there might be little room for improvement.

VITAMIN D

Egg yolks contain some vitamin D.

Rigorous trials have debunked the once popular idea that vitamin D is a wonder drug for everything from breast cancer to diabetes. For a few autoimmune conditions, however, the vitamin can be helpful. In the VITAL trial, people who took vitamin D daily for five years had a 22 percent lower risk of developing autoimmune diseases such as rheumatoid arthritis, psoriasis and lupus. "High-dose vitamin D has the effect of tamping down inflammation," Manson says. "So conditions that are really directly related to inflammation may benefit."

Lab studies have suggested that vitamin D may interfere with molecular pathways involved with inflammation, in addition to suppressing the production of proinflammatory cytokines. And in a handful of clinical trials in people with autoimmune conditions, vitamin D supplementation appeared to reduce levels of proinflammatory cytokines such as TNF-α, as well as CRP. In one small study of women with type 2 diabetes, a high dose—50,000 international units (IU) every two weeks—reduced CRP. It also increased levels of IL-10, an anti-inflammatory molecule.

A separate study in women with polycystic ovary syndrome (PCOS) found that a combination of vitamin D and omega-3 fatty acids helped to lower CRP levels. And two analyses that grouped together results from several studies back up the idea that the vitamin can cause a significant, though small, reduction in CRP. Another trial in women with PCOS found that a daily dose of 3,200 IU of the vitamin improved patients' insulin sensitivity and liver function. It didn't affect inflammatory markers, however.

Other studies haven't found consistent effects. The VITAL study reported that people who took vitamin D saw a 19 percent drop in CRP levels by the two-year mark, but this difference disappeared by the fourth year. Whether that two-year dip in inflammation translates into long-term benefits remains unclear, the researchers note. Even then, the findings may also depend on baseline levels. Most people in the VITAL study started with normal levels of vitamin D, Manson says. "People who are already getting reasonable intake may not benefit further from the supplement," she says. A review of other trials looking at inflammation-related biomarkers such as CRP, IL-6 and TNF-α found that vitamin D supplementation at several different doses didn't have a big effect.

As with omega-3s, the varying doses in the different trials may be behind the inconsistent results. Very high weekly doses—40,000 or 50,000 IU—may be necessary. (The recommended daily vitamin D intake for adults is 600 IU.) But high doses carry their own risks, such as too much calcium in the blood.
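
For scale, those weekly amounts work out to rough daily equivalents (illustrative arithmetic only, not dosing guidance):

```latex
% Converting the weekly doses mentioned above to approximate daily equivalents:
\frac{40{,}000\ \text{IU/week}}{7\ \text{days}} \approx 5{,}700\ \text{IU/day}, \qquad
\frac{50{,}000\ \text{IU/week}}{7\ \text{days}} \approx 7{,}100\ \text{IU/day},
% i.e., roughly 10 to 12 times the 600 IU recommended daily intake.
```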

Although the findings on autoimmune illnesses are intriguing, the American College of Rheumatology still has a conditional recommendation against the use of supplements, instead advocating that people make dietary changes to try to get the recommended vitamins and nutrients from food. Inflammation is central to illnesses such as rheumatoid arthritis, says Arthur M. Mandelin II, a rheumatologist at Northwestern University's Feinberg School of Medicine, but he is interested in vitamin D only as a therapy for patients with demonstrated deficiencies.

CURCUMIN

The spice turmeric contains curcumin.

The pigment that gives turmeric its yellow color, curcumin, is another promising compound for fighting chronic inflammation. The substance seems to interfere with the nuclear factor κB pathway, "the apex of inflammatory cascades in the body," explains Janet Funk, a professor of medicine and nutritional sciences at the University of Arizona, who has evaluated hundreds of human trials on the compound.

Funk's review found that the most convincing evidence for curcumin's anti-inflammatory activity was among small clinical trials. People in those trials had preexisting conditions such as metabolic disorders and osteoarthritis. In a few cases, curcumin's effects resembled those of over-the-counter anti-inflammatory drugs such as ibuprofen. "These small trials—and there are a lot of them—all sort of point to it probably being beneficial," Funk says.

The caveats in Funk's language, however, reflect the ambiguity of other results. A large Canadian trial found no measurable benefit for inflammation in people who were taking curcumin after surgery, and other trials have been inconclusive. One reason for the inconsistency is curcumin's bioavailability: the substance is poorly absorbed in the gut, rapidly metabolized and quickly cleared from the body. Some supplement manufacturers encase curcumin in nanoparticles to improve its absorption, but these formulations aren't always used in clinical trials, nor are they consistently available over the counter.

Some commercial turmeric and curcumin powders have even been found to contain harmful contaminants such as lead. "People buy turmeric powder based on its color," Funk says. "Partly to make it a more beautiful color, [manufacturers] add lead chromate."


Other compounds, such as flavanols in green tea and dark chocolate or resveratrol in red wine, are often promoted as anti-inflammatory agents. But their supporting evidence is weaker, Hu says. They can be hard for the body to absorb, which limits their effectiveness. In the case of resveratrol, the compound is metabolized and cleared so quickly it's unlikely to have any true impact. And even though a recent trial of cocoa flavanols found a promising effect on cardiovascular health, possibly because of reduced inflammation, any benefit might be outweighed by the many extra calories a person would take in by getting those compounds from chocolate.

Supplements aren't regulated like drugs. The U.S. Food and Drug Administration doesn't require supplement companies to prove that their products improve health, unlike pharmaceuticals. So there's little financial incentive for these companies to run rigorous clinical trials because, as Funk asks, "What if they find out it doesn't work?"

Such trials would also be difficult to run. Supplement ingredients can vary from batch to batch, especially for botanically derived products, in which concentrations depend on where the plants are grown and how the crucial components are extracted. Even when trials are well designed, they can come up against ethical challenges. "You cannot really preselect people on the basis of being deficient or profoundly deficient in these essential vitamins," Manson says, "because once you identify them as being profoundly deficient, you really should be treating them" and not giving half of them placebos in a multiyear trial.

Still, the appeal of supplements is obvious. We all want simple solutions to complex medical problems, especially as we learn more about the damaging effects of chronic inflammation on health. "The patient who spends a good deal of the visit focusing on diets and supplements is also that patient who's very fearful of medication," says Mandelin, the Northwestern rheumatologist. "They're ready to write [the names] down as if there is some magic answer, and unfortunately there isn't."

Instead experts recommend what good medical studies have shown to work: a healthy and balanced diet. Mediterranean-style diets, which are rich in vegetables and whole grains with some fish and poultry, have especially been shown to reduce chronic disease and to promote good health. Regular physical activity helps, too. "Many people think that they can just take a dietary supplement, pop the pill, and that replaces a healthy diet," Manson says. "That is not at all the case."

[2510.12403] Robot Learning: A Tutorial

https://arxiv.org/abs/2510.12403

Monday, October 6, 2025

The best Prime Day deals from Apple to Yeti: Save up to 75% during Amazon Big Deal Days

https://www.yahoo.com/lifestyle/deals/live/the-best-prime-day-deals-from-apple-to-yeti-save-up-to-75-during-amazon-big-deal-days-190911725.html

The Quest to Build a Truly Intelligent Machine Helps Us Learn about Our Own Intelligence | Scientific American

Building Intelligent Machines Helps Us Learn How Our Brain Works

Designing machines to think like humans provides insight into intelligence itself

The dream of artificial intelligence has never been just to make a grandmaster-beating chess engine or a chatbot that tries to break up a marriage. It has been to hold a mirror to our own intelligence, that we might understand ourselves better. Researchers seek not simply artificial intelligence but artificial general intelligence, or AGI—a system with humanlike adaptability and creativity.

Large language models have acquired more problem-solving ability than most researchers expected they ever would. But they still make silly mistakes and lack the capacity for open-ended learning: once they are trained on books, blogs, and other material, their store of knowledge is frozen. They fail what Ben Goertzel of AI company SingularityNET calls the "robot college student test": you can't put them through college (or indeed even nursery school).

The one piece of AGI these systems have unequivocally solved is language. They possess what experts call formal competence: they can parse any sentence you give them, even if it's fragmented or slangy, and respond in what might be termed Wikipedia Standard English. But they fail at other dimensions of thinking—everything that helps us deal with daily life. "We shouldn't expect them to be able to think," says neuroscientist Nancy Kanwisher of the Massachusetts Institute of Technology. "They're language processors." They skillfully manipulate words but have no access to reality other than through the text they have absorbed.

In a way, large language models mimic only the brain's language abilities, without the capacity for perception, memory, navigation, social judgments, and so forth. If, as Kanwisher puts it, our brains are Swiss Army knives with multiple functions, large language models are a really great corkscrew. She and other neuroscientists debate whether these functions are localized in specific places or spread across our gray matter, but most agree that there is at least some specialization. AI developers are incorporating such modularity into their systems in the hope of making them smarter.

No one is sure how brain regions work together to create a coherent self, let alone how a machine could mimic that. One hypothesis is that consciousness is the common ground.

OpenAI, the creator of the generative pre-trained transformer (GPT), lets paid users select add-on tools (originally called "plug-ins") to handle math, Internet search, and other kinds of queries. Each tool calls on some external bank of knowledge pertaining to its specialty. Further, and invisibly to users, the core language system may itself be modular in some sense. OpenAI has kept the specs under wraps, but many AI researchers theorize that GPT consists of as many as 16 separate neural networks, or "experts," that pool their answers to a query—although how they divide their labor is unclear. In December 2023 French AI company Mistral and, soon thereafter, Chinese firm DeepSeek made big splashes by releasing open-source versions of this "mixture of experts" architecture. The main advantage of this simple form of modularity is its computing efficiency: it is easier to train and run 16 smaller networks than a single big one. "Let's get the best of both worlds," says Edoardo Ponti, an AI researcher at the University of Edinburgh. "Let's get a system that has a high number of parameters while retaining the efficiency of a much smaller model."
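
A minimal sketch can make the "mixture of experts" idea concrete. The toy layer below (plain NumPy; the names, sizes, and routing rule are illustrative assumptions, not OpenAI's, Mistral's, or DeepSeek's actual code) uses a small gating network to score 16 experts, runs only the top-scoring ones, and blends their outputs with the gate weights, which is where the computational savings come from.

```python
# Toy mixture-of-experts layer (illustrative sketch only).
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class MoELayer:
    def __init__(self, d_model=32, n_experts=16, top_k=2):
        self.top_k = top_k
        # Each "expert" is reduced here to a single weight matrix.
        self.experts = [rng.normal(size=(d_model, d_model)) / np.sqrt(d_model)
                        for _ in range(n_experts)]
        # The gating network scores how relevant each expert is to an input.
        self.gate = rng.normal(size=(d_model, n_experts)) / np.sqrt(d_model)

    def forward(self, x):
        scores = softmax(x @ self.gate)            # relevance of every expert
        top = np.argsort(scores)[-self.top_k:]     # keep only the top-k experts
        weights = scores[top] / scores[top].sum()  # renormalize their gate weights
        # Only the selected experts are evaluated; the rest never run.
        return sum(w * (x @ self.experts[i]) for i, w in zip(top, weights))

layer = MoELayer()
print(layer.forward(rng.normal(size=32)).shape)    # (32,)
```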

But modularity comes with trade-offs. No one is sure how brain regions work together to create a coherent self, let alone how a machine could mimic that. "How does information go from the language system to logical reasoning systems or to social reasoning systems?" wonders neuroscientist Anna Ivanova of the Georgia Institute of Technology. "That is still an open question."

One provocative hypothesis is that consciousness is the common ground. According to this idea, known as global workspace theory (GWT), consciousness is to the brain what a staff meeting is to a company: a place where modules can share information and ask for help. GWT is far from the only theory of consciousness out there, but it is of particular interest to AI researchers because it conjectures that consciousness is integral to high-level intelligence. To do simple or rehearsed tasks, the brain can run on autopilot, but novel or complicated ones—those beyond the scope of a single module—require us to be aware of what we're doing.

Goertzel and others have incorporated a workspace into their AI systems. "I think the core ideas of the global workspace model are going to pop up in a lot of different forms," he says. In devising electronic representations of this model, researchers are not seeking to make conscious machines; instead they are merely reproducing the hardware of a particular theory of consciousness to try to achieve a humanlike intelligence.

Could they inadvertently create a sentient being with feelings and motivations? It is conceivable, although even Bernard Baars, inventor of GWT and co-founder of the Society for Mind Brain Sciences, thinks it's improbable. "Conscious computing is a hypothesis without a shred of evidence," he says. But if developers do succeed in building an AGI, they could provide significant insight into the structure and process of intelligence itself.


GWT has long been a case study of how neuroscience and AI research play off each other. The idea goes back to "Pandemonium," an image-recognition system that computer scientist Oliver Selfridge proposed in the 1950s. He pictured the system's modules as demons shrieking for attention in a Miltonian vision of hell. His contemporary Allen Newell preferred the more sedate metaphor of mathematicians solving problems together by gathering around a blackboard. These ideas were taken up by cognitive psychologists. In the 1980s Baars put forward GWT as a theory of human consciousness. "I learned a great deal from AI my whole career, basically because it was the only viable theoretical platform that we had," he says.

Baars inspired computer scientist Stanley Franklin of the University of Memphis to try to build a conscious computer. Whether or not Franklin's machine was truly conscious—Baars and Franklin themselves were dubious—it at least reproduced various quirks of human psychology. For instance, when its attention was drawn from one thing to another, it missed information, so it was just as bad at multitasking as people are. Starting in the 1990s, neuroscientists Stanislas Dehaene and Jean-Pierre Changeux of the Collège de France in Paris worked out what type of neuronal wiring might implement the workspace.

In this scheme, brain modules operate mostly independently, but every tenth of a second or so they have one of their staff meetings. It is a structured shouting contest. Each module has some information to offer, and the more confident it is in that information—the more closely a stimulus matches expectations, for example—the louder it shouts. Once a module prevails, the others quiet down for a moment, and the winner places its information into a set of common variables: the workspace. Other modules may or may not find the information useful; each must judge for itself. "You get this interesting process of cooperation and competition between subagents that each have a little piece of the solution," Baars says.
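
As a cartoon of that "structured shouting contest" (not Dehaene and Changeux's neuronal model, and with made-up module names and random confidence scores standing in for real evidence), a global-workspace cycle can be sketched like this:

```python
# Toy global-workspace loop: modules bid with a confidence score,
# and the winner broadcasts its message into the shared workspace.
import random

class Module:
    def __init__(self, name):
        self.name = name

    def propose(self, workspace):
        # In a real system, confidence would reflect how well a stimulus
        # matches this module's expectations; here it is just random.
        confidence = random.random()
        message = f"{self.name} report (previous workspace: {workspace!r})"
        return confidence, message

modules = [Module("vision"), Module("audition"), Module("language")]
workspace = None  # the shared "blackboard" every module can read

for cycle in range(3):  # GWT imagines a broadcast roughly every tenth of a second
    bids = [(m.propose(workspace), m.name) for m in modules]
    (confidence, message), winner = max(bids)  # the loudest (most confident) module wins
    workspace = message                        # ...and broadcasts to the rest
    print(f"cycle {cycle}: {winner} wins ({confidence:.2f})")
```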

Not only does the workspace let modules communicate with one another, but it provides a forum where they can collectively mull over information even when it is no longer being presented to the senses. "You can have some elements of reality—maybe a fleeting sensation and it's gone, but in your workspace it continues to reverberate," Dehaene says. This deliberative capacity is essential to solving problems that involve multiple steps or that stretch out over time. Dehaene has conducted psychology experiments in which he gave such problems to people in his laboratory, and he found they had to think them through consciously.

The quest for artificial intelligence teaches us that tasks we find easy are computationally demanding, and the things we find hard, such as chess, are really the easy ones.

If the system sounds anarchic, that's the point. It does away with a boss who delegates tasks among the modules because delegating is tough to get right. In mathematics, delegation—or allocating responsibilities among different actors to achieve optimal performance—falls into the category of so-called NP-hard problems, which can be prohibitively time-consuming to solve. In many approaches, such as the mixture-of-experts architecture thought to be used by OpenAI, a "gating" network doles out tasks, but it has to be trained along with the individual modules, and the training procedure can break down. For one thing, it suffers from what Ponti describes as a "chicken-and-egg problem": because the modules depend on the routing and the routing depends on the modules, training may go around in circles. Even when training succeeds, the routing mechanism is a black box whose workings are opaque.

In 2021 Manuel Blum and Lenore Blum, mathematicians and emeritus professors at Carnegie Mellon University, worked out the details of the battle for attention in the global workspace. They included a mechanism for ensuring that modules do not overstate their confidence in the information they are bringing in, preventing a few blowhards from taking over. The Blums, who are married, also suggested that modules can develop direct interconnections to bypass the workspace altogether. These side links would explain, for example, what happens when we learn to ride a bike or play an instrument. Once the modules collectively figure out which of them need to do what, they take the task offline. "It turns processing that goes through short-term memory into processing that's unconscious," Lenore Blum says.

Conscious attention is a scarce resource. The workspace doesn't have much room in it for information, so the winning module must be very selective in what it conveys to its fellow modules. That sounds like a design flaw. "Why would the brain have such a limit on how many things you can think about at the same time?" asks Yoshua Bengio, an AI researcher at the University of Montreal. But he thinks this constraint is a good thing: it enforces cognitive discipline. Unable to track the world in all its complexity, our brains have to identify the simple rules that underlie it. "This bottleneck forces us to come up with an understanding of how the world works," he says.

For Bengio, that is the crucial lesson of GWT for AI: today's artificial neural networks are too powerful for their own good. They have billions or trillions of parameters, enough to absorb vast swaths of the Internet, but tend to get caught up in the weeds and fail to extract the larger lessons from what they are exposed to. They might do better if their vast stores of knowledge had to pass through a narrow funnel somewhat like how our conscious minds operate.


Bengio's efforts to incorporate a consciousnesslike bottleneck into AI systems began before he started thinking about GWT as such. In the early 2010s, impressed by how our brains can selectively concentrate on one piece of information and temporarily block out everything else, Bengio and his co-workers built an analogous filter into neural networks. For example, when a language model such as GPT encounters a pronoun, it needs to find the antecedent. It does so by highlighting the nearby nouns and graying out the other parts of speech. In effect, it "pays attention" to the key words needed to make sense of the text. The pronoun might also be associated with adjectives, verbs, and so on. Different parts of a network can pay attention to different word relations at the same time.

But Bengio found that this attention mechanism posed a subtle problem. Suppose the network neglected some words completely, which it would do by assigning zero value to the computational variables corresponding to those words. Such an abrupt change would throw a wrench into the standard procedure for training networks. Known as backpropagation, the procedure involves tracing the network's output back to the computations that produced it, so that if the output is wrong, you can figure out why. But you can't trace back through an abrupt change.

So Bengio and others devised a "soft-attention mechanism" whereby the network is selective but not overly so. It assigns numerical weights to the various options, such as which words the pronoun might be related to. Although some words are weighted more highly than others, all remain in play; the network never makes a hard choice. "You get 80 percent of this, 20 percent of that, and because these attention weights are continuous, you can actually do [calculus] and apply backprop," Bengio says. This soft-attention mechanism was the key innovation of the "transformer" architecture—the "T" in GPT.
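
A small worked example shows why continuous weights matter: every candidate keeps some nonzero weight, the output is a smooth blend, and gradients can flow back through the whole operation. This is an illustrative NumPy sketch of scaled dot-product soft attention, not GPT's actual code.

```python
# Illustrative soft attention: a pronoun "query" blends candidate antecedents.
import numpy as np

def soft_attention(query, keys, values):
    scores = keys @ query / np.sqrt(query.size)   # similarity to each candidate
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # continuous, e.g. 0.8 / 0.15 / 0.05
    # No hard choice is made: every candidate contributes a little,
    # so the operation stays differentiable for backpropagation.
    return weights @ values, weights

rng = np.random.default_rng(1)
d = 8
query = rng.normal(size=d)            # representation of the pronoun
keys = rng.normal(size=(3, d))        # three candidate antecedents
values = rng.normal(size=(3, d))
output, weights = soft_attention(query, keys, values)
print(weights.round(2), output.shape) # weights sum to 1; output has shape (8,)
```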

In recent years Bengio has revisited this approach to create a more stringent bottleneck, which he thinks is important if networks are to achieve something approaching genuine understanding. A true global workspace must make a hard choice—it doesn't have room to keep track of all the options. In 2021 Bengio and his colleagues designed a "generative flow" network, which periodically selects one of the available options with a probability determined by the attention weights. Instead of relying on backpropagation alone, he trains the network to work in either the forward or the reverse direction. That way it can go back to fix any errors even if there is an abrupt change. In various experiments, Bengio has shown that this system develops higher-level representations of input data that parallel those our own brains acquire.
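
The hard-selection step Bengio describes—committing to a single option with probability set by the attention weights—can be illustrated in one line of sampling. The snippet below shows only that step, with made-up option names; it is not a full generative-flow network, whose forward-and-reverse training is beyond this sketch.

```python
# Hard selection by sampling: pick exactly one option, with probabilities
# given by the (continuous) attention weights.
import numpy as np

rng = np.random.default_rng(3)
attention_weights = np.array([0.8, 0.15, 0.05])  # soft-attention weights
options = ["antecedent A", "antecedent B", "antecedent C"]

choice = rng.choice(len(options), p=attention_weights)  # one discrete winner
print(options[choice])
```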


Another challenge of implementing a global workspace is hyperspecialization. Like professors in different university departments, the brain's various modules create mutually unintelligible jargons. The vision area comes up with abstractions that let it process input from the eyes. The auditory module develops representations that are suited to vibrations in the inner ear. So how do they communicate? They must find some kind of lingua franca or what Aristotle called common sense—the original meaning of that term. This need is especially pressing in the "multimodal" networks that tech companies have been introducing, which combine text with images and other forms of data.

In Dehaene and Changeux's version of GWT, the modules are linked by neurons that adjust their synapses to translate incoming data into the local vernacular. "They transform [the inputs] into their own code," Dehaene says. But the details are hazy. In fact, he hopes AI researchers who are trying to solve the analogous problem for artificial neural networks can provide some clues. "The workspace is more an idea; it's barely a theory. We're trying to make it a theory, but it's still vague—and the engineers have this remarkable talent to turn it into a working system," he says.

In 2021 Ryota Kanai, a neuroscientist and founder of Tokyo-based AI company Araya, and another neuroscientist who has crossed over into AI, Rufin VanRullen of the French National Center for Scientific Research, suggested a way for artificial neural networks to perform the translation. They took their inspiration from language-translation systems such as Google Translate. These systems are one of the most impressive achievements of AI so far. They can do their job without being told, for example, that "love" in English means the same thing as "amour" in French. Rather they learn each language in isolation and then, through their mastery, deduce which word plays the same role in French that "love" does in English.

Suppose you train two neural networks on English and French. Each gleans the structure of its respective language, developing an internal representation known as a latent space. Essentially it is a word cloud: a map of all the associations that words have in that language, built by placing similar words near one another and unrelated words farther apart. The cloud has a distinctive shape. In fact, it is the same shape for both languages because, for all their differences, they ultimately refer to the same world. All you need to do is rotate the English and French word clouds until they align. You will find that "love" lines up with "amour." "Without having a dictionary, by looking at the constellation of all the words embedded in the latent spaces for each language, you only have to find the right rotation to align all the dots," Kanai says.
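
In code, the "find the right rotation" step is usually posed as an orthogonal Procrustes problem. The sketch below is only an illustration: it uses toy random vectors and cheats by constructing the second space from the first, whereas the unsupervised method Kanai describes has to discover the alignment without any known word pairs.

```python
# Aligning two embedding spaces with an orthogonal rotation (Procrustes).
import numpy as np

rng = np.random.default_rng(2)
n_words, d = 100, 5
english = rng.normal(size=(n_words, d))            # toy "English" word vectors
true_rotation, _ = np.linalg.qr(rng.normal(size=(d, d)))
french = english @ true_rotation                   # toy "French" space: same cloud, rotated

# Orthogonal Procrustes: the rotation R minimizing ||english @ R - french||
# is U @ Vt, where U, S, Vt is the SVD of english.T @ french.
u, _, vt = np.linalg.svd(english.T @ french)
R = u @ vt

aligned = english @ R
print(np.allclose(aligned, french, atol=1e-8))     # True: "love" lines up with "amour"
```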

Because the procedure can be applied to whole passages as well as single words, it can handle subtle shades of meaning and words that have no direct counterpart in the other language. A version of this method can translate between unrelated languages such as English and Chinese. It might even work on animal communication.

VanRullen and Kanai have argued that this procedure can translate not just among languages but also among different senses and modes of description. "You could create such a system by training an image-processing system and language-processing system independently, and then actually you can combine them together by aligning their latent spaces," Kanai says. As with language, translation is possible because the systems are basically referring to the same world. This insight is just what Dehaene was hoping for: an example of how AI research may provide insight into the workings of the brain. "Neuroscientists never have thought about this possibility of aligning latent spaces," Kanai says.

To see how these principles are being put into practice, Kanai—working with Arthur Juliani, now at the Institute for Advanced Consciousness Studies, and Shuntaro Sasai of Araya—studied the Perceiver model that Google DeepMind released in 2021. It was designed to fuse text, images, audio, and other data into a single common latent space; in 2022 Google incorporated it into a system that automatically writes descriptions for YouTube Shorts. The Araya team ran a series of experiments to probe Perceiver's workings and found that, though not deliberately designed to be a global workspace, it had the hallmarks of one: independent modules, a process for selecting among them and working memory—the workspace itself.

One particularly interesting implementation of workspacelike ideas is AI People, a Sims-like game created by Prague-based AI company GoodAI. The version I saw in the summer of 2023 was set in a prison yard filled with convicts, corrupt guards and earnest psychiatrists, but the alpha version released in September 2024 included more peaceful scenarios (in April 2025 the game entered a "closed phase," and public users lost access). AI People uses GPT as the characters' brains. It controls not just their dialogue but also their behavior and emotions so that they have some psychological depth; the system tracks whether a character is angry, sad or anxious and selects its actions accordingly. The developers added other modules—including a global workspace in the form of short-term memory—to give the characters a consistent psychology and let them take actions within the game environment. "The goal here is to use the large language model as an engine, because it's quite good, but then build long-term memory and some kind of cognitive architecture around it," says GoodAI founder Marek Rosa.


A potentially groundbreaking advance in AI comes from researcher Yann LeCun of Meta. Although he does not directly cite the global workspace as inspiration, he has come by his own path to many of the same ideas while challenging the present hegemony of generative models—the "G" in GPT. "I'm advocating against a number of things that unfortunately are extremely popular at the moment in the AI/machine-learning community," LeCun says. "I'm telling people: abandon generative models."

Generative neural networks are so named because they generate new text and images based on what they have been exposed to. To do that, they have to be fastidious about detail: they must know how to spell each word in a sentence and place each pixel in an image. But intelligence is, if anything, the selective neglect of detail. So LeCun advocates that researchers go back to the now unfashionable technology of "discriminative" neural networks, such as those used in image recognition, so called because they can perceive differences among inputs—pictures of a dog versus a cat, for example. Such a network does not construct its own image but merely processes an existing image to assign a label.

LeCun developed a special training regimen to make the discriminative network extract the essential features of text, images, and other data. It may not be able to autocomplete a sentence, but it creates abstract representations that, LeCun hopes, are analogous to those in our own heads. For instance, if you feed in a video of a car driving down the road, the representation should capture its make, model, color, position and velocity while omitting bumps in the asphalt surface, ripples on puddles, glints off blades of roadside grass—anything that our brains would neglect as unimportant unless we were specifically watching for it. "All of those irrelevant details are eliminated," he says.

Those streamlined representations are not useful on their own, but they enable a range of cognitive functions that will be essential to AGI. LeCun embeds the discriminative network in a larger system, making it one module of a brainlike architecture that includes key features of GWT, such as a short-term memory and a "configurator" to coordinate the modules and determine the workflow. For instance, the system can plan. "I was very much inspired by very basic things that are known about psychology," LeCun says. Just as the human brain can run thought experiments, imagining how someone would feel in different situations, the configurator will run the discriminative network multiple times, going down a list of hypothetical actions to find the one that will achieve the desired outcome.

LeCun says he generally prefers to avoid drawing conclusions about consciousness, but he offers what he calls a "folk theory" that consciousness is the working of the configurator, which plays roughly the role in his model that the workspace does in Baars's theory.

If researchers succeeded in building a true global workspace into AI systems, would that make them conscious? Dehaene thinks it would, at least if combined with a capacity for self-monitoring. But Baars is skeptical, in part because he is still not entirely convinced by his own theory. "I'm constantly doubting whether GWT is really all that good," he says. To his mind, consciousness is a biological function that is specific to our makeup as living beings. Franklin expressed a similar skepticism when I interviewed him several years ago. (He passed away in 2023.) He argued that the global workspace is evolution's answer to the body's needs. Through consciousness, the brain learns from experience and solves the complex problems of survival quickly. Those capacities, he suggested, aren't relevant to the kinds of problems that AI is typically applied to. "You have to have an autonomous agent with a real mind and a control structure for it," he told me. "That agent has got to have kind of a life—it doesn't mean it can't be a robot, but it's got to have had some sort of development. It's not going to come into the world full-blown."

Anil K. Seth, a neuroscientist at the University of Sussex in England, agrees with these sentiments. "Consciousness is not a matter of being smart," he says. "It's equally a matter of being alive. However smart they are, general-purpose AIs, if they're not alive, are unlikely to be conscious."

Rather than endorsing GWT, Seth subscribes to a theory of consciousness known as predictive processing, by which a conscious being seeks to predict what will happen to it so it can be ready. "Understanding conscious selfhood starts from understanding predictive models of the control of the body," he says. Seth has also studied integrated information theory, which associates consciousness not with the brain's function but with its complex networked structure. By this theory, consciousness is not integral to intelligence but might have arisen for reasons of biological efficiency.

AI is an ideas-rich field at the moment, and engineers have plenty of leads to follow up already without having to import more from neuroscience. "They're killing it," notes neuroscientist Nikolaus Kriegeskorte of Columbia University. But the brain is still an existence proof for generalized intelligence and, for now, the best model that AI researchers have. "The human brain has certain tricks up its sleeve that engineering hasn't conquered yet," Kriegeskorte says.

The quest for AGI over the past several decades has taught us much about our own intelligence. We now realize that tasks we find easy, such as visual recognition, are computationally demanding, and the things we find hard, such as math and chess, are really the easy ones. We also realize that brains need very little inborn knowledge; they learn by experience almost everything they need to know. And now, through the importance of modularity, we are confirming the old wisdom that there isn't any one thing called intelligence. It is a toolbox of abilities—from juggling abstractions to navigating social complexities to being attuned to sights and sounds.

As Goertzel notes, by mixing and matching these diverse skills, our brains can triumph in realms we've never encountered before. We create novel genres of music and solve scientific puzzles that earlier generations couldn't even formulate. We step into the unknown—and one day our artificial cousins may take that step with us.

Is Dark Energy Born inside Black Holes? | Scientific American

Is Dark Energy Born inside Black Holes?

Visualization of a simulated black hole and its accretion disk

A controversial prediction about black holes and the expansion force of the universe could explain a cosmology mystery

Black holes are eaters of all things, even radiation. But what if their rapacious appetites had an unexpected side effect? A new study published in Physical Review Letters suggests that black holes might spew dark energy—and that they could help explain an intriguing conflict between different measurements of the universe.

Dark energy is the force driving the accelerated expansion of the universe. No one knows what it is, but it's thought to permeate everything. In the theory proposed in the new study, dark energy is also something that arises from dead stars—and therefore didn't exist in the universe until stars were around to begin dying. Although the idea is controversial, it's a prominent example of a newly energized attempt to understand how dark energy works, whether it changes over time and whether our cosmic accounting may be off.

"I view this black hole paper as an interesting entry in this growing canon of people testing out, 'What if I add these physics—does that reconcile these tensions?'" says Jessie Muir, a physicist at the University of Cincinnati.

Cosmic Fudge Factor

The story starts with Albert Einstein, whose theory of general relativity predicted the existence of black holes. At that time, he also thought the universe was static, which didn't mesh with his theory of gravity; in his equations, everything should have clumped together into one big blob. Because it didn't, Einstein came up with a cosmic fudge factor called a "cosmological constant" to describe a persistent and omnipresent force that kept things stable. In 1929, when astronomers discovered that the universe was actually expanding, Einstein dropped the constant.

Fast-forward three quarters of a century. In 1998 astronomers realized that not only was the universe expanding but also that this growth was accelerating—a fact that could be explained by a persistent and omnipresent force. Einstein's constant was back, and cosmologists have been trying to understand it ever since.

This dark energy, which Einstein labeled with the Greek letter lambda, has long been assumed to be unchanging and reliable through time. Even so, cosmologists have learned that dark energy was less influential in the past than it is now. In the beginning, radiation was the most important factor affecting the growth of the baby universe. In the cosmic middle ages, matter dominated that process—both regular matter, which includes stars and whales and us, and dark matter, which we cannot see or describe. Today the universe's growth is most influenced by dark energy. But dark energy's shifting role makes it harder to study across cosmic time.
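
The shifting dominance described here follows from a standard textbook relation: each component's energy density dilutes differently as the universe's scale factor a grows, so a constant dark energy inevitably takes over only late in cosmic history.

```latex
% How each component dilutes with the cosmic scale factor a:
\rho_{\text{radiation}} \propto a^{-4}, \qquad
\rho_{\text{matter}} \propto a^{-3}, \qquad
\rho_{\Lambda} = \text{const.}
```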

"The whole problem with dark energy is that it only became important, like, yesterday," says Zach Weiner, a physicist at the Perimeter Institute in Ontario.

DESI's Surprise

Studying how dark energy has evolved through time is one goal of the Dark Energy Spectroscopic Instrument (DESI), which measures galaxies and sound waves from the early universe. Perched on Iolkam Du'ag, the Arizona mountain that hosts Kitt Peak National Observatory, DESI scrutinizes objects that were around when the universe was less than half its current size. Astronomers combine these measurements with other studies of dark energy and the distribution of matter, including the Dark Energy Survey (DES) measurements of distant supernovae and maps of the cosmic microwave background light left over from the dawn of the universe. In 2024 and 2025, results from the DESI experiment showed that galaxies appear to be spread apart less than they should be if dark energy's strength were constant through cosmic time. But if dark energy is changeable, it can't be the cosmological constant. When DESI's results are combined with the other data sets, the tension with a constant grows even worse. Yet lambda, or constant dark energy, is a central paradigm of the standard model of cosmology, which has so far withstood almost every test.

Enter black holes. The new hypothesis about them is one of several novel ideas that theorists have been proposing since this spring, when the latest DESI results were published.

Physicists Kevin Croker of the University of Arizona and Greg Tarlè of the University of Michigan, two of the co-authors of the new black hole hypothesis study, say the DESI results can be interpreted as a signal of matter being converted into dark energy inside black holes. Put another way, black holes are basically tiny bubbles of dark energy. Einstein would find this notion familiar: energy and mass are equivalent, as he showed (E = mc²), and can be converted into one another. The first stars would have collapsed into supermassive black holes, which somehow created dark energy as they grew.
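
For readers who want the formula behind "tiny bubbles of dark energy": the earlier Croker-Farrah work parameterized the proposed coupling as a black hole mass that grows with the cosmic scale factor a, roughly as below. This is reproduced as a reader's note from memory of those papers, not quoted from the new study, so the convention should be checked against the originals.

```latex
% Cosmologically coupled black hole mass (earlier Croker/Farrah parameterization):
M(a) = M(a_i)\left(\frac{a}{a_i}\right)^{k}, \qquad k \approx 3 .
% With number density diluting as a^{-3}, k = 3 keeps the population's
% energy density constant -- behaving like a cosmological constant.
% Combined with E = mc^2, the growing mass acts as a dark energy contribution.
```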

This scenario could help answer two especially difficult questions, Tarlè notes: Why is dark energy showing itself now, at this epoch in our cosmic history? And why is the density of dark energy so close to the density of regular matter, an oddly neat coincidence? He says these cosmologically coupled black holes could answer both mysteries.

"Why now? Stars had to form, and form black holes, and those black holes had to grow, and everything else had to dilute," he says. "And why is it close to the matter density at the present time? You had to turn the matter into dark energy in black holes, and then it had to grow. The dark energy came from the matter."

The model lines up with recent measurements of the star-formation rate in the early universe, and it also eases a strange problem involving ghostly particles known as neutrinos.

Neutrinos are chargeless and barely interact with regular matter. They come in three flavors that can oscillate among one another as they travel. Physicists know neutrinos have mass because oscillation requires it: they've measured differences between the masses of the neutrino states as the particles flip-flop identities. But no one knows the precise mass of each; we can measure the differences among them but not their individual values. The measurements from DESI and other surveys don't leave a lot of room for massive neutrinos, however, suggesting some cosmic accounting is off.
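
For reference, oscillation experiments pin down only mass-squared differences; the approximate standard values, and the minimum total mass they imply, are sketched below (typical figures from the particle-physics literature, quoted approximately from memory).

```latex
% Measured mass-squared splittings (approximate):
\Delta m^2_{21} \approx 7.5\times10^{-5}\ \text{eV}^2, \qquad
|\Delta m^2_{31}| \approx 2.5\times10^{-3}\ \text{eV}^2 .
% Even the lightest allowed spectrum then requires
\sum m_\nu \gtrsim \sqrt{\Delta m^2_{21}} + \sqrt{|\Delta m^2_{31}|} \approx 0.06\ \text{eV},
% which is the floor that cosmological limits are pressing against.
```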

Croker and Tarlè say the dark-energy-making black holes enable a plump neutrino mass.

Black Hole Skeptics

Still, the idea of black holes converting mass to dark energy remains controversial. Several researchers said they were skeptical of Croker and Tarlè's analysis.

"It is interesting that this can fit the data," says Muir, who wasn't involved in the theory. "It is interesting to take a look at what it does to the neutrino mass [limits]. It is a point in favor of maybe this being not a thing to dismiss." But she and others were hesitant to say more.

The presence of cosmologically coupled black holes is not the only new potential explanation for the DESI results. Another leading idea suggests that dark energy is a sort of fluidlike field, called quintessence, that temporarily "thaws out" in the later universe after being held in check. Another describes an "emergent" dark energy that would have been vanishingly rare or undetectable in early cosmic history but appears in the recent past. And some teams are working on "mirage" dark energy, which tweaks models to reproduce the distance to the cosmic microwave background. Kushal Lodha of the Korea Astronomy and Space Science Institute and several colleagues recently posted a preprint to the server arXiv.org that examined several versions of these theories and found that studying fluid physics models could be a promising path forward.

Meanwhile other teams are looking elsewhere in the darkness for new ideas. Vitor Petri of the Federal University of Espírito Santo in Brazil recently led a study posted to arXiv.org as a preprint that argues that potential interactions between dark energy and dark matter explain the DESI findings better than the standard model of cosmology.

All of the new ideas could be a sign that theorists are more inclined to believe the DESI result and are increasingly looking for ways to interpret what it means, Weiner says.

The idea of black holes as bubbles of dark energy has been debated for a few decades, according to Croker, but many cosmologists dismissed it or questioned the analysis behind it. Croker has been publishing research on such black holes since 2019 with several colleagues. Their key argument is that material traveling near light speed—no matter where it is in the universe—would be tied to the expansion rate of the universe. They argued that black hole masses grew dramatically over cosmic time, in ways that could not be explained by typical black hole growth theories. But this growth makes sense if the black holes somehow bubble up dark energy in their hearts. In a paper Croker, Tarlè and Duncan Farrah, a physicist at the University of Hawaii at Manoa, published in 2023, the researchers calculated that the combined dark energy produced by black holes born from the universe's earliest generations of dying stars matches the amount of dark energy we see today. But several other scientists criticized that work.

Nevertheless, Croker and his colleagues continued their research, especially when they realized the theory could help explain the new DESI findings. Several members of the DESI collaboration signed on as co-authors for the new black hole paper, which was published in August. Croker says colleagues are increasingly interested this time.

"It's how science works. You've got to build the case," he says.

While Croker and colleagues continue that process, not everyone agrees that the DESI dark energy results are accurate to begin with. Katie Freese, a physicist at the University of Texas at Austin, is one prominent critic of the DESI measurements. She takes issue with some of the decisions in the DESI analysis, including the two-parameter description it uses to capture how dark energy changes over time. But in the event DESI does show conclusively that dark energy varies over time, she says, Croker's explanation is intriguing.
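
The two-parameter description in question is presumably the standard evolving-dark-energy fit used in the DESI analysis (the Chevallier-Polarski-Linder, or CPL, form), in which the equation-of-state ratio w is allowed to drift with the scale factor a:

```latex
w(a) = w_0 + w_a\,(1 - a).
% A cosmological constant corresponds to w_0 = -1 and w_a = 0;
% the DESI-preferred fits drift away from that point.
```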

"I am not convinced that the evidence is there yet," she says. "I am not going to say it's wrong, but it is still tentative. So I am definitely interested to hear what comes from this."

Weiner is skeptical that black holes are dark energy producers. He argues that the model in the new paper is incomplete. Weiner also says it's worth exploring a variety of theories that can explain the apparent DESI findings. He believes more clues may be found by tinkering with a variety of models that describe dark matter and testing whether it interacts with dark energy in some way.

While physicists continue developing new ideas, Freese says she hopes there is a way to demonstrate that dark energy—lambda, Einstein's constant, the bane of theorists' existence—is indeed a changeling. It would open the door to new physics and new interpretations of everything we think we have known since Einstein.

"That," she says, "would be a lot more fun than having it be a constant."