US Surgeon General says social media can pose 'a profound risk' to teens' mental health

2023-05-31 16:22:16

US Surgeon General Vivek Murthy has stated in an advisory that "we cannot conclude social media is sufficiently safe for children and adolescents." Murthy argued that the potential harms of social media outweigh the benefits for younger users.

Citing "a substantial review of the available evidence" on the impact of social media, the advisory says "there are ample indicators" it can "have a profound risk of harm to the mental health and well-being of children and adolescents." It states that, according to Pew Research, as many as 95 percent of US teens aged 13 to 17 use social media, while 19 percent said they were on YouTube "almost constantly."

"Children and adolescents who spend more than 3 hours a day on social media face double the risk of mental health problems including experiencing symptoms of depression and anxiety," the advisory reads. "This is concerning as a recent survey showed that teenagers spend an average of 3.5 hours a day on social media."

The advisory calls on tech companies to take "immediate action to mitigate unintended negative effects" of online interactions. It also asks lawmakers to "strengthen protections to ensure greater safety for children and adolescents interacting with all social media platforms."

However, some evidence suggests that social media can be a net benefit for teens. According to a recent Pew Research study, most teens say they feel more connected to their friends through social media. The study indicated that a majority of 13- to 17-year-olds in the US felt that social media provided them with a space to express their creativity, find support and feel more accepted.

Murthy acknowledged that social media can provide benefits to younger users. However, he has been sounding the alarm bell about youth and teen use of such services for some time.

In January, he told CNN that 13 was "too early" for young people to be on social media (companies in that space typically don't allow under 13s to use their services without consent from a parent or guardian). “If parents can band together and say you know, as a group, we’re not going to allow our kids to use social media until 16 or 17 or 18 or whatever age they choose, that’s a much more effective strategy in making sure your kids don’t get exposed to harm early,” Murthy told the broadcaster.

There have certainly been well-documented instances of social media negatively impacting teens' mental health, and the advisory arrives at a time when there is a growing appetite among lawmakers for regulating teen use of social media.

A bill introduced in the Senate last month aims to block teens from using social media without parental consent (Utah and Arkansas have both passed statewide legislation on that front). A separate Senate bill, the Kids Online Safety Act (KOSA), aims to force social media companies to add more protections for younger users. The bill was reintroduced after it failed to reach the Senate floor last year.

Critics say such legislation can infringe on the right to privacy and freedom of speech, among other concerns. The Electronic Frontier Foundation, among others, has argued that social media parental consent laws deprive both young people and adults of their First Amendment rights. As for KOSA, American Civil Liberties Union senior policy counsel Cody Venzke said the bill's “core approach still threatens the privacy, security and free expression of both minors and adults by deputizing platforms of all stripes to police their users and censor their content under the guise of a ‘duty of care.’”

This article originally appeared on Engadget at https://www.engadget.com/us-surgeon-general-says-social-media-can-pose-a-profound-risk-to-teens-mental-health-170517411.html?src=rss

AI presents 'risk of extinction' on par with nuclear war, industry leaders say

2023-05-30 17:07:45

With the rise of ChatGPT, Bard and other large language models (LLMs), we've been hearing warnings from people involved in the field, such as Elon Musk, about the risks posed by artificial intelligence (AI). Now, a group of high-profile industry leaders has issued a one-sentence statement effectively confirming those fears.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."

It was posted to the website of the Center for AI Safety, an organization with the stated mission "to reduce societal-scale risks from artificial intelligence." Signatories are a who's who of the AI industry, including OpenAI chief executive Sam Altman and Google DeepMind head Demis Hassabis. Turing Award-winning researchers Geoffrey Hinton and Yoshua Bengio, considered by many to be the godfathers of modern AI, also put their names to it.

It's the second such statement in the past few months. In March, Musk, Steve Wozniak and more than 1,000 others called for a six-month pause on AI development to allow industry and the public to catch up with the technology. "Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control," the letter states.

Though AI is likely not self-aware, as some have feared, it already presents risks of misuse and harm via deepfakes, automated disinformation and more. LLMs could also change the way content, art and literature are produced, potentially affecting numerous jobs.

US President Joe Biden recently stated that "it remains to be seen" if AI is dangerous, adding "tech companies have a responsibility, in my view, to make sure their products are safe before making them public... AI can help deal with some very difficult challenges like disease and climate change, but it also has to address the potential risks to our society, to our economy, to our national security." In a recent White House meeting, Altman called for regulation of AI due to potential risks.

With so many opinions floating around, the new, brief statement is meant to show a common concern about AI risks, even if the parties don't agree on what those risks are.

"AI experts, journalists, policymakers, and the public are increasingly discussing a broad spectrum of important and urgent risks from AI," a preamble to the statement reads. "Even so, it can be difficult to voice concerns about some of advanced AI’s most severe risks. The succinct statement below aims to overcome this obstacle and open up discussion. It is also meant to create common knowledge of the growing number of experts and public figures who also take some of advanced AI’s most severe risks seriously."

This article originally appeared on Engadget at https://www.engadget.com/ai-presents-risk-of-extinction-on-par-with-nuclear-war-industry-leaders-say-114025874.html?src=rss

Former Google CEO says AI poses an 'existential risk' that puts lives in danger

2023-05-26 13:24:19

Add Eric Schmidt to the list of tech luminaries concerned about the dangers of AI. The former Google chief tells guests at The Wall Street Journal's CEO Council Summit that AI represents an "existential risk" that could get many people "harmed or killed." He doesn't feel that threat is serious at the moment, but he sees a near future where AI could help find software security flaws or discover new kinds of biology. It's important to ensure these systems aren't "misused by evil people," the veteran executive says.

Schmidt doesn't have a firm solution for regulating AI, but he believes there won't be an AI-specific regulator in the US. He participated in the National Security Commission on AI, which reviewed the technology and published a 2021 report determining that the US wasn't ready for the tech's impact.

Schmidt doesn't have direct influence over AI. However, he joins a growing number of well-known moguls who have argued for a careful approach. Current Google CEO Sundar Pichai has cautioned that society needs to adapt to AI, while OpenAI leader Sam Altman has expressed concern that authoritarians might abuse these algorithms. In March, numerous industry leaders and researchers (including Elon Musk and Steve Wozniak) signed an open letter calling on companies to pause AI experiments for six months while they rethought the safety and ethical implications of their work.

There are already multiple ethics issues. Schools are banning OpenAI's ChatGPT over fears of cheating, and there are worries about inaccuracy, misinformation and access to sensitive data. In the long term, critics are concerned about job automation that could leave many people out of work. In that light, Schmidt's comments are more an extension of current warnings than a logical leap. They may be "fiction" today, as the ex-CEO notes, but not necessarily for much longer.

This article originally appeared on Engadget at https://www.engadget.com/former-google-ceo-says-ai-poses-an-existential-risk-that-puts-lives-in-danger-141741870.html?src=rss

US Surgeon General says social media may be hazardous to teen health

2023-05-23 12:56:51
U.S. Surgeon General Vivek Murthy in January 2023. | Photo by Drew Angerer/Getty Images

US Surgeon General Dr. Vivek Murthy has issued a new public advisory warning that “there are ample indicators that social media can also have a profound risk of harm to the mental health and well-being of children and adolescents.”

Although the report says social media can provide benefits to younger users and cautions that more research is needed to fully understand its impact, it says America needs to “urgently take action to create safe and healthy digital environments that minimize harm and safeguard children’s and adolescents’ mental health and well-being during critical stages of development.”

The report notes that advisories from the Surgeon General like the one issued today represent an attempt to call attention to “an urgent public health issue” and recommend how it could be tackled. Axios notes that Murthy’s recommendations aren’t binding, but that they can shift public debate and provide evidence to lawmakers and regulators to help them to begin addressing an issue.

The advisory comes as attempts to make social media safer for children and teenagers are gathering pace in the US and around the world with legislation such as the UK’s Online Safety Bill. The Surgeon General has previously called youth mental health “the defining public health issue of our time,” according to NBC News. “Adolescents are not just smaller adults,” Murthy told The New York Times in an interview. “They’re in a different phase of development, and they’re in a critical phase of brain development.”

The report says “a highly sensitive period of brain development” happens between the ages of 10 and 19, coinciding with a period when up to 95 percent of 13 to 17 year olds and nearly 40 percent of 8 to 12 year olds are using social media. But the advisory notes that frequent use of such platforms can affect brain development, impacting areas associated with emotional learning, impulse control, and social behavior. Murthy has previously said he believes even 13 years old is “too early” for children to be using social media.

The advisory calls attention to a number of interrelated harms that social media may be contributing to. It points to “extreme, inappropriate, and harmful content” that it says “continues to be easily and widely accessible by children and adolescents,” and also cites studies suggesting a link between heavy social media use and symptoms of depression and anxiety.

However, the advisory also outlines several potential benefits of social media, particularly for marginalized groups. “Studies have shown that social media may support the mental health and well-being of lesbian, gay, bisexual, asexual, transgender, queer, intersex and other youths by enabling peer connection, identity development and management, and social support,” the report says, noting that online communities can also provide support for youths from racial and ethnic minorities.

“Different children and adolescents are affected by social media in different ways, based on their individual strengths and vulnerabilities, and based on cultural, historical, and socio-economic factors,” the report notes. “There is broad agreement among the scientific community that social media has the potential to both benefit and harm children and adolescents.”

The advisory offers recommendations for policymakers, technology companies, and researchers on how the harms it cites could be addressed going forward. A common thread among them is to fund and enable more research into the impacts of social media usage, and for social media companies themselves to be more transparent in sharing data with outside experts. But there are also recommendations for lawmakers to develop stronger health and safety standards for social media products, and introduce stricter data privacy controls. Technology companies themselves, meanwhile, are urged to assess the risks their products might pose and attempt to minimize them.

Finally, although the report notes that “the onus of mitigating the potential harms of social media should not be placed solely on the shoulders” of either parents and caregivers or children themselves, it also offers some advice on how to foster a healthier relationship with social media by, for example, reporting cyberbullying and online abuse or establishing boundaries between online and offline activities.

“What kids are experiencing today on social media is unlike anything prior generations have had to contend with,” Murthy said in an interview with Axios.

“We’ve got to do what we do in other areas where we have product safety issues, which is to set in place safety standards that parents can rely on, that are actually enforced,” he told the NYT.


Spiritual Needs

2023-05-23 12:55:47

The Lord endowed man with imagination and the right to choose. Some people vote for Democrats, and others for Republicans. Some root for Spartak, and others for Dynamo. For that matter, some people believe okroshka should be made with kvass, and others that it should be made with kefir.

And so it is now. Only a few years ago the whole country was debating a possible switch to a four-day workweek. A few years ago, nothing! Just a few weeks ago one of the trade union organizations raised the subject again. Here are their words, I quote: "Reducing the standard length of the workweek so as to introduce a four-day workweek will help raise labor productivity thanks to the freed-up time, which allows the worker to preserve his health and satisfy his spiritual needs."

You see? I won't even dwell on how freed-up free time is somehow supposed to preserve one's health. Opinions on that vary widely too.

But the spiritual needs of the worker!

Still, we live in a world where people have the right to choose. And now another organization is calling for, mind you, extending the workweek to six days. Because these are the times we live in, and we need a breakthrough. The proposal's authors point to Iran and Nepal, where the six-day workweek is already a reality. Though, frankly, I cannot imagine what one could possibly do in Nepal six days a week.

But never mind Nepal. Everyone has the right to choose. Here is what is interesting, though. Those who favor the four-day week always stipulate that it is possible only if pay stays the same. Those who favor the six-day week say nothing about pay at all.

And fine, forget the money. We can move mountains without money if we must. But what about the spiritual needs? That is what a Russian person needs far more than money. And those who propose working longer say nothing about those needs.

So the initiative's fate looks sad to me. Doomed in advance.

Then again, no one is going to switch to four days either. Because how could you pay a person the same for working less?

So we will stay right where we are.

Hitting the Books: How music chords hack your brain to elicit emotion

2023-05-21 17:14:54

Johnny Cash's Hurt hits way different in A Major, as much so as Ring of Fire in G Minor. The difference in tone between the chords is, ahem, a minor one: simply the third lowered by a half step. But that change can fundamentally alter how a song sounds, and what feelings that song conveys. In their new book Every Brain Needs Music: The Neuroscience of Making and Listening to Music, Dr. Larry S. Sherman, professor of neuroscience at the Oregon Health and Science University, and Dr. Dennis Plies, a music professor at Warner Pacific University, explore the fascinating interplay between our brains, our instruments, our audiences, and the music they make together.
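That one-semitone difference is easy to see in numbers. Here is a minimal sketch (ours, not the book's; the helper names are illustrative) that builds the two triads as MIDI note numbers, where middle C is 60 and each step is one semitone:

```python
def major_triad(root: int) -> list[int]:
    """Root, major third (+4 semitones), perfect fifth (+7 semitones)."""
    return [root, root + 4, root + 7]

def minor_triad(root: int) -> list[int]:
    """Identical, except the third is lowered by one semitone (+3)."""
    return [root, root + 3, root + 7]

MIDDLE_C = 60
print(major_triad(MIDDLE_C))  # [60, 64, 67] -> C, E, G
print(minor_triad(MIDDLE_C))  # [60, 63, 67] -> C, E-flat, G
```

The middle note is the only one that moves, yet, as the excerpt below explains, the brain routes the two chords to different regions and assigns them different emotional meanings.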

Columbia University Press

Excerpted from Every Brain Needs Music: The Neuroscience of Making and Listening to Music by Larry S. Sherman and Dennis Plies published by Columbia University Press. Copyright (c) 2023 Columbia University Press. Used by arrangement with the Publisher. All rights reserved.


The Minor Fall and The Major Lift: Sorting Out Minor and Major Chords

Another function within areas of the secondary auditory cortex involves how we perceive different chords. For example, part of the auditory cortex (the superior temporal sulcus) appears to help distinguish major from minor chords.

Remarkably, from there, major and minor chords are processed by different areas of the brain outside the auditory cortex, where they are assigned emotional meaning. For example, in Western music, minor keys are perceived as “serious” or “sad” and major keys are perceived as “bright” or “happy.” This is a remarkable response when you think about it: two or three notes played together for a brief period of time, without any other music, can make us think “that is a sad sound” or “that is a happy sound.” People around the world have this response, although the tones that elicit these emotions differ from one culture to another. In a study of how the brain reacts to consonant chords (notes that sound “good” together, like middle C and the E and G above middle C, as in the opening chord of Billy Joel’s “Piano Man”), subjects were played consonant or dissonant chords (notes that sound “bad” together) in the minor and major keys, and their brains were analyzed using a method called positron emission tomography (PET). This method of measuring brain activity is different from the fMRI studies we discussed earlier. PET scanning, like fMRI, can be used to monitor blood flow in the brain as a measure of brain activity, but it uses tracer molecules that are injected into the subjects’ bloodstreams. Although the approach is different, many of the caveats we mentioned for fMRI studies also apply to PET studies. Nonetheless, these authors reported that minor chords activated an area of the brain involved in reward and emotion processing (the right striatum), while major chords induced significant activity in an area important for integrating and making sense of sensory information from various parts of the brain (the left middle temporal gyrus). These findings suggest the locations of pathways in the brain that contribute to a sense of happiness or sadness in response to certain stimuli, like music.

Don't Worry, Be Happy (or Sad): How Composers Manipulate our Emotions

Although major and minor chords by themselves can elicit “happy” or “sad” emotions, our emotional response to music that combines major and minor chords with certain tempos, lyrics, and melodies is more complex. For example, the emotional link to simple chords can have a significant and dynamic impact on the sentiments in lyrics. In some of his talks on the neuroscience of music, Larry, working with singer, pianist, and songwriter Naomi LaViolette, demonstrates this point using Leonard Cohen’s widely known and beloved song “Hallelujah.” Larry introduces the song as an example of how music can influence the meaning of lyrics, and then he plays an upbeat ragtime, with mostly major chords, while Naomi sings Cohen’s lyrics. The audience laughs, but it also finds that the lyrics have far less emotional impact than when sung to the original slow-paced music with several minor chords.

Songwriters take advantage of this effect all the time to highlight their lyrics’ emotional meaning. A study of guitar tablatures (a form of writing down music for guitar) examined the relationship between major and minor chords paired with lyrics and what is called emotional valence: In psychology, emotions considered to have a negative valence include anger and fear, while emotions with positive valence include joy. The study found that major chords are associated with higher-valence lyrics, which is consistent with previous studies showing that major chords evoke more positive emotional responses than minor chords. Thus, in Western music, pairing sad words or phrases with minor chords, and happy words or phrases with major chords, is an effective way to manipulate an audience’s feelings. Doing the opposite can, at the very least, muddle the meaning of the words but can also bring complexity and beauty to the message in the music.

Manipulative composers appear to have been around for a long time. Music was an important part of ancient Greek culture. Although today we read works such as Homer’s Iliad and Odyssey, these texts were meant to be sung with instrumental accompaniment. Surviving texts from many works include detailed information about the notes, scales, effects, and instruments to be used, and the meter of each piece can be deduced from the poetry (for example, the dactylic hexameter of Homer and other epic poetry). Armand D’Angour, a professor of classics at Oxford University, has recently recreated the sounds of ancient Greek music using original texts, music notation, and replicated instruments such as the aulos, which consists of two double-reed pipes played simultaneously by a single performer. Professor D’Angour has organized concerts based on some of these texts, reviving music that has not been heard for over 2,500 years. His work reveals that the music then, like now, uses major and minor tones and changes in meter to highlight the lyrics’ emotional intent. Simple changes in tones elicited emotional responses in the brains of ancient Greeks just as they do today, indicating that our recognition of the emotional value of these tones has been part of how our brains respond to music deep into antiquity.

This article originally appeared on Engadget at https://www.engadget.com/hitting-the-books-every-brain-needs-music-sherman-piles-columbia-university-press-143039604.html?src=rss

Apple bans its employees from using ChatGPT and other generative AI platforms for work

2023-05-19 16:04:06

Earlier today, OpenAI released the official ChatGPT app for iPhone. The tool has become well known for answering users' requests and handling everything from the simplest to the most complex issues thanks to artificial intelligence. Apple, however, doesn't want its employees using such tools. In an internal memo, the company said that generative AIs can't be used for work.


The post Apple bans its employees from using ChatGPT and other generative AI platforms for work appeared first on 9to5Mac.


A World Without People

2023-05-18 16:07:15

Futurologists frighten humanity with predictions of artificial intelligence taking over. And not just futurologists: politicians are already proposing special taxes on the use of robots in manufacturing.

The topic is not new, either. Twenty years ago experts were already noting that spam accounted for a larger share of email traffic than messages from living people. You know this yourself: you open your inbox and find dozens of messages you never asked for.

And now it turns out that nearly half of all activity on the internet comes from bots. Not in email, that is, and not in mass mailings, but in social networks proper.

You see? There is a 50 percent chance that you do not know whom you are talking to on your social networks: a living person or a robot.

And there is no reason to believe this trend will not continue. Especially since over the past couple of years artificial intelligence has reached such heights that soon everyone will forget Alan Turing and his test. It is no longer relevant.

What happens next is the truly interesting part. And Alan Turing can no longer help us make sense of that future, because he is dead.

It is quite possible that one day we will wake up in a world where there are no people left in virtual space at all. Only robots.

And as soon as people realize this, and understand that robots are not much fun, they will leave the virtual world and return to the real one. They will go outside, and a young man will meet a girl, or a girl will meet a young man. And as a token of their love they will hang their phones on some bridge in place of a padlock.

And the world will return to its old ways. The whole spell will lift. People will talk with their voices instead of text messages. And at night they will be with each other, not with a screen, where everything seems to be the same, yet is never quite the same.

I wonder whether a single one of you listening to this commentary believed any of that.

Of course everything will stay just as it is. Except that people will have to fight their way through crowds of jabbering bots to make themselves heard by one another.

And internet companies will have to keep expanding their capacity so that there is room enough for everyone in virtual space.

And a new profession will surely appear: the lawyer who defends the rights of bots.

And then those lawyers will become bots themselves.

News

2023-05-17 19:02:17
The Order of Saint Pampers.
To be worn as a mask over the mouth.©

Downturn looms in Faroese population development

2023-05-16 13:04:52

With a rising mortality rate, slowing gains in life expectancy and a dramatic drop in fertility, the population of the Faroe Islands could be headed for decline after decades of growth.

At the beginning of this century, the average fertility rate for Faroese women of childbearing age was 2.5. For the past couple of years, however, that rate has dropped significantly, according to Statistics Faroe Islands. Whereas in 2019 the fertility rate was 2.4, by 2020 and 2021 it had dropped to 2.3, only to plunge to 2.05 in 2022, well below the 2.1 replacement threshold, under which a generation of women of childbearing age can no longer replace itself.

In other words, a recipe for overall population decline, notwithstanding the fact that the fertility rate in the island nation remains the highest in Europe, as per Eurostat.
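Why 2.1 is the break-even point can be sanity-checked with a toy generational model. This is an illustrative sketch, not Statistics Faroe Islands' methodology; the share of girls among births (~48.8 percent, since slightly more boys are born than girls) and survival to childbearing age (~98 percent, typical of a low-mortality country) are rough assumptions of ours:

```python
def next_generation(women: float, fertility_rate: float,
                    girl_share: float = 0.488, survival: float = 0.98) -> float:
    """Women in the next generation: births x share of girls x survival to adulthood."""
    return women * fertility_rate * girl_share * survival

cohort = 1000.0
for rate in (2.5, 2.1, 2.05):
    print(f"fertility {rate}: 1000 women -> {next_generation(cohort, rate):.0f} in the next generation")
```

Under these assumptions the break-even fertility rate is 1 / (0.488 × 0.98) ≈ 2.09, which is why a rate of 2.05 implies a slowly shrinking generation while the earlier Faroese rate of 2.5 implied growth.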

The total population of the Faroe Islands had reached 54,362 at the beginning of April this year, a 1.1-percent increase year-on-year, though once again a smaller increase than the year before.

The past year's population growth is due to a natural increase of 187 and net migration of 382, according to Statistics Faroe Islands, with fewer people having died during this period compared to the previous 12-month period.

The death rate, meanwhile, which for many years has been fairly steady, was unusually high in the first half of 2022, we're told, while at the same time the birth rate was dramatically declining.

The net migration during these past 12 months had reportedly dropped year-on-year, with growing numbers of people leaving the country.

In related news, according to the latest figures from Statistics Faroe Islands, the average life expectancy for Faroese women is 85.4 years, compared to 81.3 for men.

Since the mid-1980s, the average life expectancy has increased by 6.6 years for women and 9.4 years for men. Back then the difference between women and men in this regard was seven years, which means the gap has been closing in recent years.

Compared to other countries in Europe, the average life expectancy in the Faroe Islands is on the higher end. For women, San Marino tops the life expectancy list at 86.5 years, followed by Liechtenstein, Spain, Switzerland, France and the Faroe Islands. For men, Liechtenstein reports the highest life expectancy at 82.5 years, followed by Iceland, Switzerland, Norway and the Faroes.

The post Downturn looms in Faroese population development appeared first on Local.fo.


Why the US CAN keep raising its national debt without serious consequences

2023-05-14 02:43:27
There is a creature on the Russian internet by the name of ALEXANDR_ROGERS who loves to teach the world how to live. He particularly dislikes America, a country he has apparently never visited and his knowledge of which he seems to draw either from Lenta.ru or from Khazin's ravings.

Here is his latest rooster crow: "...Russian pro-Western liberals and Ukrainian Russophobes have spent years tediously repeating 'California alone has a bigger GDP than the entire Russian Federation.' As in, 'Who do you even think you're up against?' We are up against a country a third of which lies in ruins (like Detroit), another third of which is homeless (like that same California), and the rest of which is on drugs. And the whole of it is mired in shootouts and mass shootings. Though it is mired in debt even deeper..."

This marvel doesn't even reflect on the fact that he uses the American internet, American computers, American mobile phones, American navigation, American search engines, American programming languages, American cars and American airplanes; that he watches American movies (if not he himself, then his neighbor); that he wears American jeans and American sneakers (Chinese-made, perhaps, but American); and that he speaks a mix of American and Nizhny Novgorod Russian: the youth slang is full of borrowings like "baitit," "bulling," "vibe," "voisit," "zadonatit," "krinzh," "izi," "krash," "kripovyi"...

In fact, some 90 percent of what surrounds this marvel, and any person anywhere in the world today, is American, with the exception of the vyshyvanka shirts in Ukraine and... I honestly cannot even think what in Russia. Ah, yes: the "hypersonic" missiles. Which, true, are not hypersonic in the generally accepted sense, but still.

But never mind that. What truly delights comrade Rogers, like all the semi-literate Russian economists who understand nothing about how the American economy works, is the prospect of an American default and, above all, the raising of the federal debt ceiling.

Let me start by going back to 2011. A quote: "Last Sunday, July 24, the White House and the US Congress once again failed to reach an agreement on the national debt and the budget deficit. To this day it remains unclear by what plan the US authorities intend to overcome the financial barriers... there is a chance of a 'technical default,' which is the last thing President Barack Obama wants."

No, better still, back to the early 1960s. The American economist Robert Triffin pointed out the (apparent) contradiction that arises when a single country's currency is used both for international settlements and for other nations' currency reserves. He wrote: "To supply the central banks of other countries with the dollars they need to build their national currency reserves, the United States must run a constant balance-of-payments deficit. But a balance-of-payments deficit undermines confidence in the dollar and lowers its value as a reserve asset, so maintaining that confidence requires a balance-of-payments surplus."

July 18, 2011... Two weeks before the crisis: what happens if the US declares a technical default?

So you could say Khazin and Rogers have only just "woken up." Now, why is the contradiction Triffin pointed to merely apparent, and why are Rogers and his kind spouting sheer nonsense with no understanding of economics or finance?

Посмотрим на первую картинку:


It shows the percentage of GDP that the US must spend on servicing the national debt, with a projection out to 2033. Today, in 2023, it is 1.9 percent, or 475 billion dollars in absolute terms.
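As a quick sanity check, the two debt-service figures quoted above (1.9 percent of GDP and $475 billion) can be cross-checked against each other; the implied GDP number below is derived, not taken from the text.

```python
# Cross-check the debt-service figures quoted above:
# interest of $475B is said to equal 1.9% of GDP.
interest_paid_bn = 475      # billions of dollars (figure from the text)
share_of_gdp = 0.019        # 1.9% (figure from the text)

implied_gdp_bn = interest_paid_bn / share_of_gdp
print(f"Implied US GDP: ${implied_gdp_bn:,.0f}B")  # about $25,000B, i.e. roughly $25 trillion
```

An implied GDP of roughly $25 trillion is in line with recent US GDP, so the two quoted figures are at least mutually consistent.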

First question: what is the "national debt"? It consists of two main parts: debt held by the public and intragovernmental debt. The debt itself arises to cover the difference between government revenues and expenditures.

The "borrowing" mechanism is the sale by the government of Treasuries of various kinds: bonds, certain classes of securities such as inflation-protected notes, and so on. All of these instruments have fixed terms: having sold them today, the government must make the payments on them only some time later. But the holders of these securities can realize a profit sooner by selling them to someone else at the market price prevailing at the moment of sale.
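The resale mechanics mentioned above come down to discounting a bond's remaining payments at the prevailing market yield. A minimal sketch, with illustrative numbers of my own choosing rather than figures from the text:

```python
def bond_price(face: float, coupon_rate: float, market_yield: float, years: int) -> float:
    """Present value of a bond's annual coupons plus principal, discounted at the market yield."""
    coupons = sum(face * coupon_rate / (1 + market_yield) ** t for t in range(1, years + 1))
    principal = face / (1 + market_yield) ** years
    return coupons + principal

# A 10-year bond trades at par when the market yield equals its coupon rate...
print(bond_price(100, 0.03463, 0.03463, 10))  # = 100.0
# ...and below par if prevailing yields rise, which is how a reseller can gain or lose:
print(bond_price(100, 0.03463, 0.04, 10))     # about 95.6
```

This is why the market price "at the moment of sale" can differ from what the holder originally paid.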

Today the US pays a bond yield of, for example, 3.463 percent per year on 10-year bonds and 3.783 percent on 30-year bonds. Is that a lot or a little?

Everyone, it seems, has learned over the market era that money cheapens over time because of inflation. Since Treasuries are sold to the whole world, the bond yield should be compared with world inflation.
Today world inflation stands at... 8.75 percent (!). What does that mean? It means that when the time comes to repay the debt, the US will be paying with money that is cheaper than the money it borrowed! In other words, in real terms the US actually earns on its debt, and an increase in the debt poses no financial threat to it.
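The claim above is just a statement about the sign of the real interest rate. A minimal sketch using the Fisher relation; the yield and inflation figures are the ones quoted in the text, and using "world inflation" as the deflator for Treasuries is the author's assumption, not standard practice.

```python
# Real return on a loan via the Fisher relation:
# (1 + real) = (1 + nominal) / (1 + inflation)
def real_rate(nominal: float, inflation: float) -> float:
    return (1 + nominal) / (1 + inflation) - 1

nominal_10y = 0.03463     # 10-year Treasury yield quoted in the text
world_inflation = 0.0875  # world inflation figure quoted in the text

r = real_rate(nominal_10y, world_inflation)
print(f"Real yield: {r:.2%}")  # about -4.86%: lenders lose purchasing power

# Purchasing power of the repayment after 10 years, relative to the sum lent:
factor = (1 + r) ** 10
print(f"Repayment is worth about {factor:.2f} of the principal in today's money")  # about 0.61
```

If these inflation rates persisted, the borrower would indeed repay in noticeably cheaper dollars; the argument stands or falls on whether 8.75 percent is the right deflator and whether it lasts.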

The only thing that limits the US in terms of increasing the debt is its finite capacity to absorb the borrowed money.

It is as if you had borrowed money that you are unable to spend (!).

And that is not all. There is one more key misunderstanding among people like Rogers and Khazin. They simply do not understand why expenditures can exceed revenues. Yet any competent economist understands that these expenditures are not mere spending, not money frittered away. They are INVESTMENTS that bring money back, including to the government, albeit indirectly: through taxes, through jobs, and so on.

What is today's US inflation connected with? Exclusively with the exit from the COVID situation. Over the two COVID years the US government made colossal payments, not only to people to keep them afloat but also to companies that had ground to a halt. Production, meanwhile, did not grow and even contracted. Hence the inflation. But the exit from inflation is proceeding successfully: already in April annual inflation fell below 5 percent, and by the end of the year it will most likely return to its long-run level.

And what about the budget disputes and the gloomy forecasts of certain economists?
Here everything is simple: it is a more or less ritualized pre-election fight between Republicans and Democrats. In the course of this fight the Republicans try to cut or shut down populist Democratic programs that the Democrats could otherwise play as trump cards.

Nothing terrible. It was no accident that I first sent you back to 2011. This is a standard situation. Back then the government even worked for several months without pay because the budget had not been approved. The world did not collapse. Neither did the US.

As for the "Rust Belt," drug addiction, and shootings: some other time. There are no real problems there either. Speaking of murders, by the way: the homicide rate per 100,000 population is 5.3 in the US and 9.2 in Russia... But that is just an aside. Taken from here

The AI takeover of Google Search starts now

2023-05-11 16:34:25
A screenshot of Google’s SGE generating an AI snapshot
Google’s still careful to say SGE is an experiment — but it’s a front-and-center one now. | Image: Google

Google is moving slowly and carefully to make AI happen. Maybe too slowly and too carefully for some people. But if you opt in, a whole new search experience awaits.

The future of Google Search is AI. But not in the way you think. The company synonymous with web search isn’t all in on chatbots (even though it’s building one, called Bard), and it’s not redesigning its homepage to look more like a ChatGPT-style messaging system. Instead, Google is putting AI front and center in the most valuable real estate on the internet: its existing search results.

To demonstrate, Liz Reid, Google’s VP of Search, flips open her laptop and starts typing into the Google search box. “Why is sourdough bread still so popular?” she writes and hits enter. Google’s normal search results load almost immediately. Above them, a rectangular orange section pulses and glows and shows the phrase “Generative AI is experimental.” A few seconds later, the glowing is replaced by an AI-generated summary: a few paragraphs detailing how good sourdough tastes, the upsides of its prebiotic abilities, and more. To the right, there are three links to sites with information that Reid says “corroborates” what’s in the summary.

Google calls this the “AI snapshot.” All of it is generated by Google’s large language models, all of it sourced from the open web. Reid then mouses up to the top right of the box and clicks an icon Google’s designers call “the bear claw,” which looks like a hamburger menu with a vertical line to the left. The bear claw opens a new view: the AI snapshot is now split sentence by sentence, with links underneath to the sources of the information for that specific sentence. This, Reid points out again, is corroboration. And she says it’s key to the way Google’s AI implementation is different. “We want [the LLM], when it says something, to tell us as part of its goal: what are some sources to read more about that?”

A few seconds later, Reid clicks back and starts another search. This time, she searches for the best Bluetooth speakers for the beach. Again, standard search results appear almost immediately, and again, AI results are generated a few seconds later. This time, there’s a short summary at the top detailing what you should care about in such a speaker: battery life, water resistance, sound quality. Links to three buying guides sit off to the right, and below are shopping links for a half-dozen good options, each with an AI-generated summary next to it. I ask Reid to follow up with the phrase “under $100,” and she does so. The snapshot regenerates with new summaries and new picks.

A screenshot of SGE, Google’s new search product, showing Bluetooth speakers. Image: Google
These AI snapshots will appear at the top of Search and pull information from all over the web.

This is the new look of Google’s search results page. It’s AI-first, it’s colorful, and it’s nothing like you’re used to. It’s powered by some of Google’s most advanced LLM work to date, including a new general-purpose model called PaLM 2 and the Multitask Unified Model (MUM) that Google uses to understand multiple types of media. In the demos I saw, it’s often extremely impressive. And it changes the way you’ll experience search, especially on mobile, where that AI snapshot often eats up the entire first page of your results.

There are some caveats: to get access to these AI snapshots, you’ll have to opt in to a new feature called Search Generative Experience (SGE for short), part of an also-new feature called Search Labs. Not all searches will spark an AI answer — the AI only appears when Google’s algorithms think it’s more useful than standard results, and sensitive subjects like health and finances are currently set to avoid AI interference altogether. But in my brief demos and testing, it showed up whether I searched for chocolate chip cookies, Adele, nearby coffee shops, or the best movies of 2022. AI may not be killing the 10 blue links, but it’s definitely pushing them down the page.

SGE, Google executives tell me over and over, is an experiment. But they’re also clear that they see it as a foundational long-term change to the way people search. AI adds another layer of input, helping you ask better and richer questions. And it adds another layer of output, designed to both answer your questions and guide you to new ones.

An opt-in box at the top of search results might sound like a small move from Google compared to Microsoft’s AI-first Bing redesign or the total newness of ChatGPT. But SGE amounts to the first step in a complete rethinking of how billions of people find information online — and how Google makes money. As pixels on the internet go, these are as consequential as it gets.

A screenshot of an AI snapshot about Bryce Canyon Image: Google
The AI snapshots borrow colors from the content they discover and change depending on what you search.

Asked and answered

Google feels pretty good about the state of its search results. We’re long past the “10 blue links” era of 25 years ago, when Googling meant typing into a box and getting links in return. Now you can search by asking questions aloud or snapping a picture of the world, and you might get back everything from images to podcasts to TikToks.

Many searches are already well-served by these results. If you’re going to Google and searching “Facebook” to land on facebook.com or you’re looking for the height of the Empire State Building, you’re already good to go.

But there’s a set of queries for which Google has never quite worked, which is where the company is hoping AI can come in. Queries like “Where should I go in Paris next week?” or “What’s the best restaurant in Tokyo?” These are hard questions to answer because they’re not actually one question. What’s your budget? What days are all the museums open in Paris? How long are you willing to wait? Do you have kids with you? On and on and on.

“The bottleneck turns out to be what I call ‘the orchestration of structure,’” says Prabhakar Raghavan, the SVP at Google who oversees Search. Much of that data exists somewhere on the internet or even within Google — museums post hours on Google Maps, people leave reviews about wait times at restaurants — but putting it all together into something like a coherent answer is really hard. “People want to say, ‘plan me a seven-day vacation,’” Raghavan says, “and they believe if the language model outputs it, it should be right.”

One way to think about these is simply as questions with no right answer. A huge percentage of people who come to Google aren’t looking for a piece of information that exists somewhere. They’re looking for ideas, looking to explore. And since there’s also likely no page on the internet titled “Best vacation in Paris for a family with two kids, one of whom has peanut allergies and the other loves soccer, and you definitely want to go to the Louvre on the quietest possible day of the week,” the links and podcasts and TikToks won’t be much help.

Because they’re trained on a huge corpus of data from all over the internet, large language models can help answer those questions by essentially running lots of disparate searches at once and then combining that information into a few sentences and a few links. “Lots of times you have to take a single question and break it into 15 questions” to get useful information from search, Reid says. “Can you just ask one? How do we change how the information is organized?”

That’s the idea, but Raghavan and Reid are both quick to point out that SGE still can’t do these completely creative acts very well. Right now, it’s going to be much more handy for synthesizing all the search data behind questions like “what speaker should I buy to take into the pool.” It’ll do well with “what were the best movies of 2022,” too, because it has some objective Rotten Tomatoes-style data to pull from along with the internet’s many rankings and blog posts on the subject. AI appears to make Google a better information-retrieval machine, even if it’s not quite ready to be your travel agent.

One thing that didn’t show up in most SGE demos? Ads. Google is still experimenting with how to put ads into the AI snapshots, though rest assured, they’re coming. Google’s going to need to monetize the heck out of AI for any of this to stick.

A screenshot of a search results page with ads at the top. Image: Google
Right now, AI hasn’t really changed how Google ads work. But it will.

The Google Bot

At one point in our demo, I asked Reid to search only the word “Adele.” The AI snapshot contained more or less what you’d expect — some information about her past, her accolades as a singer, a note about her recent weight loss — and then threw in that “her live performances are even better than her recorded albums.” Google’s AI has opinions! Reid quickly clicked the bear claw and sourced that sentence to a music blog but also acknowledged that this was something of a system failure.

Google’s search AI is not supposed to have opinions. It’s not supposed to use the word “I” when it answers questions. Unlike Bing’s multiple-personality chaos or ChatGPT’s chipper helper or even Bard’s whole “droll middle school teacher” vibe, Google’s search AI is not trying to seem human or affable. It’s actually trying very hard to not be those things. “You want the librarian to really understand you,” Reid says. “But most of the time, when you go to the library, your goal is for them to help you with something, not to be your friend.” That’s the vibe Google is going for.

The reason for this goes beyond just that strange itchy feeling you get talking to a chatbot for too long. And it doesn’t seem like Google is just trying to avoid super horny AI responses, either. It’s more a recognition of the moment we’re in: large language models are suddenly everywhere, they’re far more useful than most people would have guessed, and yet they have a worrying tendency to be confidently wrong about just about everything. When that confidence comes in perfectly formed paragraphs that sound good and make sense, people are going to believe the wrong stuff.

A few executives I spoke to mentioned a tension in AI between “factual” and “fluid.” You can build a system that is factual, which is to say it offers you lots of good and grounded information. Or you can build a system that is fluid, feeling totally seamless and human. Maybe someday you’ll be able to have both. But right now, the two are at odds, and Google is trying hard to lean in the direction of factual. The way the company sees it, it’s better to be right than interesting.

Google projects a lot of confidence in its ability to be factually strong, but recent history seems to suggest otherwise. Not only is Bard less wacky and fun than ChatGPT or Bing, but it’s also often less correct — it makes basic mistakes in math, information retrieval, and more. The PaLM 2 model should improve some of that, but Google certainly hasn’t solved the “AI lies” problem by a long shot.

There’s also the question of when AI should appear at all. Sometimes it’s obvious: the snapshots shouldn’t appear if you ask sensitive medical questions, Reid says, or if you’re looking to do something illegal or harmful. But there’s a wide swath of searches where AI may or may not be useful. If I search “Adele,” some basic summary information at the top helps; if I search “Adele music videos,” I’m much more likely to just want the YouTube videos in the results.

Google can afford to be cautious here, Reid says, because the fail state is just Google search. So whenever the snapshot shouldn’t appear, or whenever the model’s confidence score is low enough that it might not be more useful than the top few results, it’s easy to just not do anything.

Bold and responsible

Compared to the splashy launch of the new Bing or the breakneck developmental pace of ChatGPT, SGE feels awfully conservative. It’s an opt-in, personality-free tool that collates and summarizes your search results. For Google, suddenly in an existential crisis over the fact that AI is changing the way people interact with technology, is that enough?

A couple of executives used the same phrase to describe the company’s approach: “bold and responsible.” Google knows it has to move fast — not only are chatbots booming in popularity, but TikTok and other platforms are stealing some of the more exploratory search out from under Google. But it also has to avoid making mistakes, giving people bad information, or creating new problems for users. To do that would be a PR disaster for Google, it would be yet more reason for people to try new products, and it would potentially destroy the business that made Google a trillion-dollar company.

So, for now, SGE remains opt-in and personality-free. Raghavan says he’s comfortable playing a longer game: “knee-jerk reacting to some trend is not necessarily going to be the way to go.” He’s also convinced that AI is not some panacea that changes everything, that 10 years from now, we’ll all do everything through chatbots and LLMs. “I think it’s going to be one more step,” he says. “It’s not like, ‘Okay, the old world went away. And we’re in a whole new world.’”

In other words, Google Bard is not the future of Google search. But AI is. Over time, SGE will start to come out of the labs and into search results for billions of users, mingling generated information with links out to the web. It will change Google’s business and probably upend parts of how the web works. If Google gets it right, it will trade 10 blue links for all the knowledge on the internet, all in one place. And hopefully telling the truth.


Google launches an AI coding bot for Android developers

2023-05-11 16:32:55
A screenshot showing Studio Bot
Screenshot: Emma Roth / The Verge

Google is launching a new AI-powered coding bot for Android developers. During its I/O event on Wednesday, Google announced that the tool, called Studio Bot, will help developers build apps by generating code, fixing errors, and answering questions about Android.

According to Google, the bot is built on Codey, the company’s new foundational coding model that stems from its updated PaLM 2 large language model (LLM). Studio Bot supports both the Kotlin and Java programming languages and will live directly in the toolbar on Android Studio. There, developers can get quick answers to their questions or even have the bot debug a portion of their code.

 Screenshot: Emma Roth / The Verge

While Google notes that developers don’t need to share their source code with Google in order to use Studio Bot, the company will receive data on the conversations they have with the tool. Google says the bot is still in “very early days” but that it will continue training it to improve its answers. It’s also currently only available to developers in the US for now via the Canary channel, and there’s no word on when it will see a global launch.

The new AI-powered coding tool comes as part of Google’s continued push into AI. After making its Bard chatbot available in early access in March, Google has been gradually adding new features, including the ability to generate, debug, and explain lines of code. Packaging these features in a standalone coding assistant pits Google against Microsoft and Amazon, both of which have AI-powered coding tools of their own that developers can integrate into a variety of integrated development environments (IDE).

While Microsoft rolled out a ChatGPT-like assistant to GitHub Copilot earlier this year, Amazon made CodeWhisperer available to developers for free last month. Studio Bot has a far more limited reach than Amazon and GitHub’s integrative coding assistants, though, as it’s only useful for Android developers. Google’s new Duet AI coding tool should help fill that gap, but it’s only available to Google Cloud users.



Scammers used AI-generated Frank Ocean songs to steal thousands of dollars

2023-05-11 16:00:44

More AI-generated music mimicking a famous artist has made the rounds — while making lots of money for the scammer passing it off as genuine. A collection of fake Frank Ocean songs sold for a reported $13,000 CAD ($9,722 in US dollars) last month on a music-leaking forum devoted to the Grammy-winning singer, according to Vice. If the story sounds familiar, it’s essentially a recycling of last month’s AI Drake / The Weeknd fiasco.

As generative AI takes the world by storm — Google just devoted most of its I/O 2023 keynote to it — people eager to make a quick buck through unscrupulous means are seizing the moment before copyright laws catch up. It’s also caused headaches for Spotify, which recently pulled not just Fake Drake but tens of thousands of other AI-generated tracks after receiving complaints from Universal Music.

The scammer, who used the handle mourningassasin, told Vice they hired someone to make “around nine” Ocean songs using “very high-quality vocal snippets” of the Thinkin Bout You singer’s voice. The user posted a clip from one of the fake tracks to a leaked-music forum and claims to have quickly convinced its users of its authenticity. “Instantly, I noticed everyone started to believe it,” mourningassasin said. The fact that Ocean hasn’t released a new album since 2016 and recently teased an upcoming follow-up to Blond may have added to the eagerness to believe the songs were real.

The scammer claims multiple people expressed interest in private messages, offering to “pay big money for it.” They reportedly fetched $3,000 to $4,000 for each song in mid to late April. The user has since been banned from the leaked-music forum, which may be having an existential crisis as AI-generated music makes it easier than ever to produce convincing knockoffs. “This situation has put a major dent in our server’s credibility, and will result in distrust from any new and unverified seller throughout these communities,” said the owner of a Discord server where the fake tracks gained traction.

This article originally appeared on Engadget at https://www.engadget.com/scammers-used-ai-generated-frank-ocean-songs-to-steal-thousands-of-dollars-222042845.html?src=rss

Developer-focused portal Stack Overflow lays off 10% of staff

2023-05-11 15:43:55

Stack Overflow, a question-and-answer portal for developers, will lay off 10% of its workforce, the company announced on Thursday.

The job cuts, which will affect at least 58 employees, are a result of the company’s renewed focus on profitability due to macroeconomic concerns, CEO Prashanth Chandrasekar said in a blog post.

“Our focus for this fiscal year is on profitability and that, along with macroeconomic pressures, led to today’s changes. They were also the result of taking a hard look at our strategic priorities for this fiscal year as well as our organizational structure as we invest in the continued growth of Stack Overflow for Teams and pursue agility and flexibility,” Chandrasekar said.



Spotify has reportedly removed tens of thousands of AI-generated songs

2023-05-09 16:38:16

Spotify has reportedly pulled tens of thousands of tracks from generative AI company Boomy. It's said to have removed seven percent of the songs created by the startup's systems, which underscores the swift proliferation of AI-generated content on music streaming platforms.

Universal Music reportedly told Spotify and other major services that it detected suspicious streaming activity on Boomy's songs. In other words, there were suspicions that bots were being used to boost listener figures and generate ill-gotten revenue for uploaders. Spotify pays royalties to artists and rights holders on a per-listen basis.

“Artificial streaming is a longstanding, industry-wide issue that Spotify is working to stamp out across our service,” Spotify, which confirmed that it had taken down some Boomy tracks, told Insider. "When we identify or are alerted to potential cases of stream manipulation, we mitigate their impact by taking action that may include the removal of streaming numbers and the withholding of royalties. This allows us to protect royalty payouts for honest, hardworking artists."

Universal Music's chief digital officer Michael Nash told the Financial Times, which first reported on Spotify removing Boomy's tracks, that his company is "always encouraged when we see our partners exercise vigilance around the monitoring of activity on their platforms."

AI-generated music hit the headlines last month after a song that appeared to include vocals from Drake and The Weeknd went viral. Universal Music Group, which represents both artists, claimed that using the duo's voices to train generative AI systems constituted “a breach of our agreements and a violation of copyright law." Both Spotify and Apple Music removed the song from their libraries.

Music industry figures have been sounding the alarm bells about the overarching impact of AI-generated tracks, as well as people using bots to drive up listener figures and siphon money out of the kitties that streaming services use to pay royalties.

Boomy, which opened its doors in 2021, enables people to generate songs based on text inputs. Over the weekend, the company said that "curated delivery to Spotify of new releases by Boomy artists has been re-enabled."

Boomy says its users "have created 14,554,448 songs" or just under 14 percent of "the world's recorded music." Its website states that users can create original songs in seconds, then upload them "to streaming platforms and get paid when people listen."

This article originally appeared on Engadget at https://www.engadget.com/spotify-has-reportedly-removed-tens-of-thousands-of-ai-generated-songs-154144262.html?src=rss

Production of Apple TV+ show ‘Severance’ suspended amid writers strike

2023-05-09 14:03:32

Apple TV+ renewed its sci-fi show Severance for a second season last year, even before the season finale of the first season aired. However, the show had been facing “backstage drama,” according to a recent report, and now things have gotten worse. Amid the ongoing writers’ strike, the production of Severance’s second season has been completely suspended in New York.


The post Production of Apple TV+ show ‘Severance’ suspended amid writers strike appeared first on 9to5Mac.


Apple announces Final Cut Pro and Logic Pro coming to iPad later this month

2023-05-09 13:55:43

Mic drop moment for the pro apps team at Apple this morning. Apple just announced that Final Cut Pro and Logic Pro are officially and finally coming to iPad. Final Cut Pro is Apple’s professional video editing software, which has been exclusive to the Mac until now. The same is true for Logic Pro, Apple’s professional audio editing software for Mac. Both apps will land on the iPad later this month.


The post Apple announces Final Cut Pro and Logic Pro coming to iPad later this month appeared first on 9to5Mac.


EU Warns Apple About Limiting Speeds of Uncertified USB-C Cables for iPhones

2023-05-07 02:01:56
Last year, the EU passed legislation that will require the iPhone and many other devices with wired charging to be equipped with a USB-C port in order to be sold in the region. Apple has until December 28, 2024 to adhere to the law, but the switch from Lightning to USB-C is expected to happen with iPhone 15 models later this year.


It was rumored in February that Apple may be planning to limit charging speeds and other functionality of USB-C cables that are not certified under its "Made for iPhone" program. Like the Lightning port on existing iPhones, a small chip inside the USB-C port on iPhone 15 models would confirm the authenticity of the USB-C cable connected.

"I believe Apple will optimize the fast charging performance of MFi-certified chargers for the iPhone 15," Apple analyst Ming-Chi Kuo said in March.

In response to this rumor, European Commissioner Thierry Breton has sent Apple a letter warning the company that limiting the functionality of USB-C cables would not be permitted and would prevent iPhones from being sold in the EU when the law goes into effect, according to German newspaper Die Zeit. The letter was obtained by German press agency DPA, and the report says the EU also warned Apple during a meeting in mid-March.

Given that it has until the end of 2024 to adhere to the law, Apple could still move forward with including an authentication chip in the USB-C port on iPhone 15 models later this year. And with iPhone 16 models expected to launch in September 2024, even those devices would be on the market before the law goes into effect.

The report says the EU intends to publish a guide to ensure a "uniform interpretation" of the legislation by the third quarter of this year.

It is worth emphasizing that Apple potentially limiting the functionality of uncertified USB-C cables connected to iPhone 15 models is only a rumor for now, so it remains to be seen whether or not the company actually moves forward with the alleged plans. iPads with USB-C ports do not have an authentication chip for this purpose.

(Thanks, Manfred!)

This article, "EU Warns Apple About Limiting Speeds of Uncertified USB-C Cables for iPhones" first appeared on MacRumors.com



The Red Line

2023-05-07 01:52:07
Talk of red lines sharply reminds me of the old joke about Leonid Ilyich Brezhnev, who, on returning from India, asked his makeup artist to put a red dot between his eyebrows, like Indira Gandhi's... The makeup artist, surprised, asked: "What for, Leonid Ilyich??" "Well, after our meeting Indira Gandhi told me: Everything about you is fine, Lyonya. It's just that right here, something is missing..."