Artificial Intelligence & Robotics



  • Subscriber

New hardware used in ultra-fast analogue deep learning device

The research promises faster, more energy efficient artificial intelligence produced at lower cost.


Artificial intelligence (AI), or machine learning, is taking the computing world by storm, although it’s been under development for decades. AI tools are changing the way we use data and computers in an array of fields, from medicine to traffic control. New research shows how we can make AI even more efficient and useful.

The name “artificial intelligence” often stirs the imagination and incites imagery of sentient robots. But the reality is different. Machine learning does not emulate human intelligence. What it does do, however, is mimic the complex neural pathways that exist in our own brains.

This mimicry is what gives AI its power. But that power comes at great cost – both financially and in terms of the energy required to run the machines.

Research coming out of the Massachusetts Institute of Technology (MIT) and published in Science is part of a growing subset of AI research focused on AI architectures that are cheaper to build, quicker and more energy efficient.


The multidisciplinary team used programmable resistors to produce “analogue deep learning” machines. Just as transistors are the core of digital processors, the resistors are built into repeating arrays to create a complex, layered network of artificial “neurons” and “synapses”. The machine can achieve complicated tasks such as image recognition and natural language processing.

Humans learn through the weakening and strengthening of the synapses which connect our neurons – the brain cells.

Whereas digital deep learning weakens and strengthens links between artificial neurons through algorithms, analogue deep learning occurs through increasing or decreasing the electrical conductance of the resistors.

Increased conductance in the resistors is achieved by pushing more protons into them, attracting more electron flow. This is done using a battery-like electrolyte which allows protons to pass, but blocks electrons.
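As a rough sketch of the idea (not the MIT team’s actual hardware or code), treating each resistor’s conductance as a network “weight” means the physics does the arithmetic for you: Ohm’s law gives a current I = G·V through every resistor, and summing the currents down a column performs the multiply-accumulate at the heart of a neural network layer.

```python
import numpy as np

# Illustrative only: an analogue crossbar computes a matrix-vector product via physics.
# Each weight is stored as a conductance G (rather than a number in memory);
# inputs are applied as voltages V. Ohm's law gives per-resistor currents (I = G * V),
# and Kirchhoff's current law sums them along each output column.

G = np.array([[0.8, 0.1],     # conductances = learned weights (arbitrary units)
              [0.3, 0.9]])
V = np.array([0.5, 1.0])      # input voltages = activations

I = G @ V                     # column currents = weighted sums (the MAC step)
print(I)                      # [0.5  1.05] -> passed on to the next "layer"

# Training ("analogue deep learning") would nudge each conductance up or down,
# for example by inserting or removing protons, instead of updating stored numbers.
```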

“The working mechanism of the device is electrochemical insertion of the smallest ion, the proton, into an insulating oxide to modulate its electronic conductivity. Because we are working with very thin devices, we could accelerate the motion of this ion by using a strong electric field and push these ionic devices to the nanosecond operation regime,” says senior author Bilge Yildiz, professor in the Nuclear Science and Engineering, and Materials Science and Engineering departments at MIT.

Using phosphosilicate glass (PSG) as the inorganic base compound for the resistors, the team found their analogue deep learning device could process information one million times faster than previous attempts. This makes their machine about one million times faster than the firing of our own synapses.

“The action potential in biological cells rises and falls with a timescale of milliseconds, since the voltage difference of about 0.1 volt is constrained by the stability of water,” says senior author Ju Li, professor of materials science and engineering. “Here we apply up to 10 volts across a special solid glass film of nanoscale thickness that conducts protons, without permanently damaging it. And the stronger the field, the faster the ionic devices.”
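The “million times faster” comparison follows directly from those two timescales; a quick back-of-the-envelope check:

```python
synapse_time = 1e-3   # biological action potentials: roughly milliseconds
device_time = 1e-9    # the protonic resistors: nanosecond operation regime

print(synapse_time / device_time)  # 1,000,000 -> about a million-fold speed-up
```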

The resistor can run for millions of cycles without breaking down thanks to the fact that the protons don’t damage the material.

“The speed certainly was surprising. Normally, we would not apply such extreme fields across devices, in order to not turn them to ash. But instead, protons ended up shuttling at immense speeds across the device stack, specifically a million times faster compared to what we had before. And this movement doesn’t damage anything, thanks to the small size and low mass of protons,” says lead author and MIT postdoc Murat Onen.

“The nanosecond timescale means we are close to the ballistic or even quantum tunnelling regime for the proton, under such an extreme field,” adds Li.

PSG also makes the device extremely energy efficient and is compatible with silicon fabrication techniques. It also means the device can be integrated into commercial computing hardware.

“With that key insight, and the very powerful nanofabrication techniques, we have been able to put these pieces together and demonstrate these devices are intrinsically very fast and operate with reasonable voltages,” says senior author Jesús A. del Alamo, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS). “This work has really put these devices at a point where they now look really promising for future applications.”

“Once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft,” Onen adds.

Analogue deep learning has two key advantages over its digital cousin.

Onen says computation is performed within the memory device rather than being transferred from memory to the processors.

Analogue processors also perform operations simultaneously, rather than needing more time to do new computations.

Now that the effectiveness of the device has been shown, the team aims to engineer it for high-volume manufacturing. They also plan to remove factors which limit the voltage required to efficiently drive the protons.

“The collaboration that we have is going to be essential to innovate in the future. The path forward is still going to be very challenging, but at the same time it is very exciting,” Professor del Alamo says.

https://cosmosmagazine.com/technology/deep-learning-analogue-fast/

 


  • Subscriber

Artificial intelligence tool swings into action, discovering alternative physics variables!

Analysing pendulum videos, the AI identified variables not present in current mathematics.


An artificial intelligence tool has examined physical systems and, not surprisingly, found new ways of describing what it found.

How do we make sense of the universe? There’s no manual. There’s no prescription.

At its most basic, physics helps us understand the relationships between “observable” variables – these are things we can measure. Velocity, energy, mass, position, angles, temperature, charge. Some variables like acceleration can be reduced to more fundamental variables. These are all variables in physics which shape our understanding of the world.

These variables are bound together through equations.

Albert Einstein’s most famous equation, E = mc², summarises the relationship between the variables energy (E) and mass (m), using the constant: the speed of light (c). In fact, all of Einstein’s very complicated Theory of Special Relativity can be reduced to relationships between three variables: energy, mass and velocity.
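Plugging illustrative numbers into that relationship shows how much energy is locked up in even a tiny mass (a minimal sketch; the one-gram figure is chosen purely for illustration):

```python
c = 2.998e8   # speed of light, metres per second
m = 0.001     # one gram of mass, in kilograms

E = m * c**2  # Einstein's mass-energy relation
print(f"{E:.2e} joules")  # ~9.0e13 J, on the order of a 20-kiloton explosion
```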

There’s nothing sacred about our choice of variable. The variables and mathematics we choose have stood the test of time as the ones that make sense for a given theory or physical system.

But, what if we were to find other physical variables to solve the same problems? It wouldn’t change the problem… or the solution. But it might give us new insights into the inner workings of the universe and accelerate scientific discovery.


FULL REPORT


  • Subscriber

Patch me up, Scotty! Remote surgery robot destined for ISS

Miniature medical robot named MIRA heads to space to test its surgical skills.


Strap yourself in so you don’t float away, select the required procedure, lie back and relax as your autonomous surgery robot patches you up from whatever space ailment bothers you. Sound far-fetched?


Not according to Professor Shane Farritor, from the University of Nebraska-Lincoln, who has just received funding from NASA to prepare his miniature surgical robot for a voyage to the International Space Station (ISS) in 2024.

MIRA, which stands for “miniaturised in vivo robotic assistant”, is comparatively small for a surgery-performing machine – small enough to fit inside a microwave-sized experimental locker within the ISS. The brainchild of Farritor and colleagues at the start-up company Virtual Incision, MIRA has been under development for almost 20 years.

The ultimate aim for MIRA is to be able to perform surgery autonomously and remotely, which has far-reaching ramifications for urgent surgery in the field – whether that’s in the depths of space, a remote location or even in a war-torn region.


Initially MIRA won’t go near anyone’s body. Once on the ISS, it will autonomously perform tasks designed to mimic the movements required for surgery, such as cutting stretched rubber bands and pushing metal rings along a wire.


Being autonomous is important as it won’t need to access bandwidth to communicate back to Earth.

MIRA has already successfully completed surgery-like tasks via remote operation including a colon resection.

Space is the next frontier.

Farritor says, as people go further and deeper into space, they might need surgery. “We’re working toward that goal.”

The stint on the ISS will not only mark the most autonomous operation so far, but it will also provide insight into how such devices might function in zero gravity.

The dream goal is for MIRA to function entirely on its own, says Farritor. Just imagine: “the astronaut flips a switch, the process starts, and the robot does its work by itself. Two hours later, the astronaut switches it off and it’s done”.

As anyone who has seen the scene in the movie The Martian can attest, it would certainly make pulling a wayward antenna spike out of yourself from within a deserted Martian habitat station far more comfortable.

https://cosmosmagazine.com/health/remote-surgery-robot-destined-for-iss/

 


  • Subscriber

Curved-space robot defies known laws of physics, heralding new locomotive technology possibilities

The discovery has implications for locomotion without propulsion.


A robot engineered at Georgia Institute of Technology (Georgia Tech) has done the unthinkable and flouted a steadfast law of motion, suggesting that new laws need to be defined. Such new principles may have applications in new forms of locomotion without propellants.

We’ve all seen the hilarious slapstick gag where the unwitting individual steps on a banana peel, landing comically on their rump. It may not seem like it, but the quip relies on the fact that human locomotion, like all locomotion, is based on Newton’s third law of motion.

Newton’s third law states that for every action there is an equal and opposite reaction. So, when a human takes a step, we push against the Earth and the Earth pushes back, propelling us forward. But this only works thanks to friction. Without friction (or with minimal friction, for example, when there is a slimy banana peel on the ground) there is no push – we just slide straight over the ground and can’t move forward, falling unceremoniously back to Earth.

The same is true of all locomotion. Rockets, for example, eject massive amounts of matter at high speed to push themselves in the opposite direction. Animals in the sea and air push against water and atmosphere respectively. There is always a push to move.

But the Georgia Tech robot has bypassed this need for a thrust in order to change momentum. It does this by making use of curved space.

See, we generally think of space in terms of what are called Cartesian coordinates – the x-, y- and z-axes of three-dimensional coordinate space that we all used in high school. These axes all jut out from an “origin point” at right angles to each other and continue ad infinitum in straight lines.

But space can be visualised as curved as well, instead of just plain, boring and flat.

The results of the Georgia Tech study are published in the Proceedings of the National Academy of Sciences (PNAS). The team claim their findings defy the requirement of Newtonian dynamics “that a stationary object cannot move without exchanging momentum with its environment.”

Confined to a spherical surface in a highly isolated system, the predominant effects felt by this robot were not from its environment but the curvature of the space itself.

The robot, as seen in the video below, gyrates and jiggles changing shape as it does so. But these effects alone in normal, flat space would not see it move in any particular direction.

“We let our shape-changing object move on the simplest curved space, a sphere, to systematically study the motion in curved space,” says lead researcher Zeb Rocklin, assistant professor in the School of Physics at Georgia Tech. “We learned that the predicted effect, which was so counter-intuitive it was dismissed by some physicists, indeed occurred: as the robot changed its shape, it inched forward around the sphere in a way that could not be attributed to environmental interactions.”

To make sure that the effects induced by the curvature of the robot’s space dominated, the physicists had to isolate the system as much as possible from external forces. Only then could the team ensure minimal interaction or exchange of momentum with the environment.

The curved space was produced by placing a set of drive motors on curved tracks. The tracks were then attached to a rotating shaft to produce a spherical space.

Friction was curtailed using air bearings and bushings – low-heat, low-mess alternatives to ball bearings. Gravity was diminished by aligning the rotating shaft with Earth’s gravity.

The robot felt only slight forces due to friction and gravity, but the two effects were seen to hybridise with the curvature of the space itself to produce a strange dynamic with properties which could not have been produced by either friction or gravity on their own. So, the team demonstrated not only how curved space can be realised, but also how it fundamentally challenges basic concepts attributed to the laws of flat space.

Rocklin hopes the methods used will allow further experimental investigations of curved space.

While the observed effects due to curved space are small, the researchers believe that increasingly precise robotics will see these curvature-induced effects having practical applications. Similar to how slight changes in the frequency of light due to gravity became crucial to GPS navigation, the team expects their findings and future findings in curved-space dynamics will be applicable in engineering.

The principles of how the curvature of space can be harnessed for locomotion may ultimately be useful in circumnavigating the highly curved space around black holes. “This research also relates to the ‘Impossible Engine’ study,” says Rocklin. “Its creator claimed that it could move forward without any propellant. That engine was indeed impossible, but because spacetime is very slightly curved, a device could actually move forward without any external forces or emitting a propellant – a novel discovery.”

https://cosmosmagazine.com/technology/curved-space-robot-defies-physics/

 


  • Subscriber

Getting a grip with “necrobotic” spiders


Mechanical engineers have essentially turned dead spiders into “claw machines” and used them as mechanical grippers to pick up objects heavier than themselves, according to a new study in Advanced Science.

“It happens to be the case that the spider, after it’s deceased, is the perfect architecture for small scale, naturally derived grippers,” says senior author Daniel Preston, assistant professor of Mechanical Engineering at Rice University in Texas, in the US.

Spiders use hydraulics to move their limbs; the prosoma chamber near their heads contracts to send blood to their limbs (which forces them to extend), and when the pressure is relieved the legs contract.

Using wolf spider cadavers, they tapped into this chamber with a needle to deliver a minute amount of air, activating and extending the legs almost instantly.

But what on Earth are the applications for these so-called spidery “necrorobots”?

According to Preston: “there are a lot of pick-and-place tasks we could look into, repetitive tasks like sorting or moving objects around at these small scales, and maybe even things like assembly of microelectronics.

“Also, the spiders themselves are biodegradable. So we’re not introducing a big waste stream, which can be a problem with more traditional components.”

 

https://cosmosmagazine.com/technology/necrorobotics-spiders-electrodes/

 

 


  • Subscriber


It isn’t always the best idea to design robots that look like us – just ask R2-D2

Humanoids have long been regarded as the ‘holy grail’ but there’s room for a lot more diversity in the robot design world.

In August of 2021, Tesla CEO Elon Musk revealed his company’s plan to produce a humanoid robot called 'Optimus'. The announcement, accompanied by a dancing actor in a skintight robot costume, was ridiculed by sceptics, but Tesla is not alone in the quest to create human-shaped machines.

Research labs and companies around the globe are investing serious amounts of money in these complex robots, for uses from warehouses to nursing homes. Their pitch is two-fold: the human form is relatable, and it fits seamlessly into our existing infrastructure. These reasons are sensible, but there might be a better way forward.

Humanoids, sometimes called the 'holy grail' of robotics, are robots with a torso, two legs, two arms, and a head. Google image search the word 'robot', and your browser will fill with pictures of humanoids.

Some robot creators argue that the autonomous machines we’ll interact with in the future need to look like us, because we relate best to other humans. I've found in my own research that we connect emotionally with machines onto which we can project our own traits, but the notion that we most relate to a human shape may be missing something.

People name their robot vacuum. Would they like it better if it was a humanoid that walked around vacuuming? Probably not. When something has a human form, we have very different expectations for its intelligence and how it will perform. Our current robots don’t live up to those expectations, and we won’t have a robot butler anytime soon.

Fortunately for robot designers, things don’t need to be humanoid for us to relate to them - they just need to mimic cues that we recognise. Artists and animators have honed the art of capturing our emotional expression and then putting it into other shapes - think Bambi, Cars, or the Pixar lamp.

Japanese cities have attracted visitors with mascots like Kumamon, a simple, cute bear character created by the government of Kumamoto Prefecture to boost tourism. And thousands of Star Wars fans love R2-D2, who is basically a tin can on wheels and is yet more emotionally compelling than most humanoids.

Avoiding the human form when designing robots not only avoids the problem of expectation management, it can also side-step a minefield of social injustices. For example, researchers from the University of Bielefeld, Germany, found that people will rate a humanoid robot with long hair better suited for stereotypical female tasks like household and care work and less suitable for doing technical repairs, compared to an identical robot with short hair.

Previous studies have also shown that artificial agents with human faces are seen as more attractive, trustworthy, persuasive, and intelligent when they’re of the same ethnicity as the person rating them. Technology that looks too human not only reflects our biases, it can also entrench and perpetuate harmful stereotypes.

There’s also the logistical argument for humanoid robots: we live in a world built for humans, with stairs and door knobs and narrow passageways. In order to navigate these spaces, some designers say, we need robots that are built like us. It’s certainly true that many spaces are difficult to navigate with wide bodies, wheels, or treads. But as we look to the future, maybe there are better choices.

Robots can be smaller than the eye can see, or larger than a house. They can be encased in metals or soft materials. They can roll, climb, slither, jump, or ooze from point A to B. According to roboticist Prof Robin Murphy of Texas A&M University, the best form is “whatever shape gets the job done.”

Ironically, challenging the notion that the 'right way' to do things is the way humans do them creates opportunities to reach broader human demographics, specifically people who often find themselves left out of the design process entirely.

Laurel Riek, a roboticist at the University of California, San Diego, has pointed out that rather than throwing our funds into expensive, difficult-to-engineer, bipedal humanoid robots, we could invest in making infrastructure more friendly for wheelchairs, walkers, and strollers. If a wheelchair can access a space, so can a simple and efficient robot on wheels.

Human beings are far from uniform in our construction and abilities, and if we design our world to reflect that, we could radically increase accessibility at the same time as developing better, cheaper robots with a greater range of skills. And besides, why take the stairs when you can climb the walls?

While there will always be some use cases for humanoid robots, we may be able to do better for work, companionship, and society than replicating what we already have. As robots move from behind factory walls into shared spaces, robotics provides us a wonderful opportunity: it allows us to think outside of ourselves, not only in designing our robots, but also in our ideas for how to construct a better world.

https://www.sciencefocus.com/news/isnt-always-best-idea-design-robots-look-like-us-just-ask-r2-d2/

 


  • 4 weeks later...
  • Subscriber

Existing computer simulations are not yet powerful enough to harness AlphaFold for drug discovery

Models can’t accurately predict the interactions of many protein structures and molecules at the same time.


Scientists want to use computer models to help reduce the cost and time associated with drug discovery, and to develop new antibiotics to fight the growing crisis of antimicrobial resistance. But a new study shows that, at the moment, using the latest tools together is little better than guesswork.

This is a barrier to drug development – at least as the computer models exist now – according to a new study published in Molecular Systems Biology.

Researchers from Massachusetts Institute of Technology (MIT) explored whether existing computer programs could accurately predict the interactions between antibacterial compounds and bacterial protein structures generated by Google’s new tool called AlphaFold – an artificial intelligence program that generates 3D protein structures from their amino acid sequence.

AlphaFold is exciting the science world.

But the MIT team found that the predictions of existing models, called molecular docking simulations, performed little better than chance.

“Breakthroughs such as AlphaFold are expanding the possibilities for in silico (ie by computers) drug discovery efforts, but these developments need to be coupled with additional advances in other aspects of modelling that are part of drug discovery efforts,” says senior author James Collins, professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our study speaks to both the current abilities and the current limitations of computational platforms for drug discovery.”

The hope is that scientists could use modelling to perform large-scale screening for new compounds that affect previously untargeted bacterial proteins, with the end result being the development of new antibiotics that work in unprecedented ways.

The team studied the interactions of 296 essential proteins from Escherichia coli with 218 antibacterial compounds, using molecular docking simulations that predict how strongly two molecules will bind together based on their shapes and physical properties.

Previously, these simulations have been used successfully to screen large numbers of compounds against a single protein target to identify compounds that bind the best. But here, the predictions became much less accurate when attempting to screen many compounds against many potential protein targets.

In fact, the model produced false positive rates similar to true positive rates when simulating interactions between existing drugs and their targets.

“Utilising these standard molecular docking simulations, we obtained an auROC value of roughly 0.5, which basically says you’re doing no better than if you were randomly guessing,” Collins explains.
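That auROC figure is easy to interpret: 1.0 means true binders are always ranked above non-binders, while 0.5 is what purely random scoring achieves. A minimal, hypothetical demonstration of the 0.5 baseline (not the MIT docking pipeline itself):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical labels: 1 = the compound really binds the protein target, 0 = it doesn't.
y_true = rng.integers(0, 2, size=10_000)

# "Docking scores" drawn at random, i.e. a model carrying no real signal.
y_score = rng.random(10_000)

print(roc_auc_score(y_true, y_score))  # ~0.5: indistinguishable from guessing
```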

 

But this wasn’t due to some fault of AlphaFold, as similar results occurred when they used the same modelling approach with protein structures that had been experimentally determined in the lab.

“AlphaFold appears to do roughly as well as experimentally determined structures, but we need to do a better job with molecular docking models if we’re going to utilise AlphaFold effectively and extensively in drug discovery,” adds Collins.

One explanation for this poor performance is that the protein structures fed into the model are static, but in real biological systems proteins are flexible and often shift their configurations.

The researchers were able to improve the performance of the molecular docking simulations by running them through four additional machine-learning models trained on data that describe how proteins and other molecules interact with each other.

https://cosmosmagazine.com/technology/drug-discovery-alphafold-simulations/

 


  • Subscriber

Realistic androids coming closer, as scientists teach a robot to share your laughter

Researchers think it will be only 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.


Do you ever laugh at an inappropriate moment?

A team of Japanese researchers has taught a robot when to laugh in social situations, which is a major step towards creating an android that will be “like a friend.”

“We think that one of the important functions of conversational AI is empathy,” says Dr Koji Inoue, an assistant professor at Kyoto University’s Graduate School of Informatics, and lead author on a paper describing the research, published in Frontiers in Robotics and AI.

“Conversation is, of course, multimodal, not just responding correctly. So we decided that one way a robot can empathize with users is to share their laughter, which you cannot do with a text-based chatbot.”

The researchers trained an AI with data from 80 speed dating dialogues, from a matchmaking marathon with Kyoto University students. (Imagine meeting a future partner at an exercise designed to teach a robot to laugh…)

“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy, because as you know, most laughter is actually not shared at all,” says Inoue.

“We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”

They then added this system to a hyper-realistic android named Erica, and tested the robot on 132 volunteers.

Participants listened to one of three different types of dialogue with Erica: one where she was using the shared laughter system, one where she didn’t laugh at all, and one where she always laughed whenever she heard someone else do it.

They then rated the interaction for empathy, naturalness, similarity to humans, and understanding.

The researchers found that the shared-laughter system scored higher than either baseline.

While they’re pleased with this result, the researchers say that their system is still quite rudimentary: they need to categorise and examine lots of other types of laughter before Erica will be chuckling naturally.

“There are many other laughing functions and types which need to be considered, and this is not an easy task. We haven’t even attempted to model unshared laughs even though they are the most common,” says Inoue.

Plus, it doesn’t matter how realistic a robot’s laugh is if the rest of its conversation is unnatural.

“Robots should actually have a distinct character, and we think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” says Inoue.

“We do not think this is an easy problem at all, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.”

 


  • Subscriber

Lucrative Breakthrough Prizes go to Alphafold developers – among others

Proteins sit at the heart of three of the five $3 million prizes.


A pair of AI researchers behind the protein-folding program Alphafold has received one of five $3 million (US) “Breakthrough Prizes”.

The Breakthrough Prizes were founded a decade ago by a group of internet entrepreneurs including Yuri Milner, a Russian-Israeli billionaire, and Mark Zuckerberg, the American chief executive of Meta.

They comprise five annual awards for researchers who have made “game-changing discoveries” in three fields: Life Sciences, Fundamental Physics, and Mathematics.

They’re considered the most lucrative awards in science.

UK-based AI researchers Demis Hassabis and John Jumper, the brains behind Deepmind’s Alphafold, are among the laureates for the Life Sciences prize.

Alphafold has predicted the structures for almost all proteins known to science. Understanding how proteins fold is a notoriously difficult problem, and Alphafold represents a colossal leap forward in the field.

Two other Life Sciences prizes have been awarded, one to Clifford Brangwynne, based at Princeton University in the US, and Anthony Hyman, based at the Max Planck Institute in Germany, for their discovery of a new process in cellular dynamics. They’ve shown how proteins and other biomolecules in cells can interact with each other without a cell membrane present.

Another has gone to Emmanuel Mignot, from Stanford University in the US, and Masashi Yanagisawa, at Japan’s University of Tsukuba, for their work on narcolepsy – a disorder that features chronic attacks of drowsiness.

Mignot and Yanagisawa both discovered separately that a protein called orexin, which normally regulates wakefulness, plays a key role in narcolepsy. When the immune system attacks the cells that make orexin, people can develop neurodegenerative narcolepsy. This work has spurred treatments which relieve narcoleptic symptoms.

The Breakthrough Prize in Fundamental Physics went to an international group of four researchers for “foundational work in the field of quantum information”, while the prize in mathematics went to US mathematician Daniel Spielman for “contributions to theoretical computer science and mathematics”.

https://cosmosmagazine.com/science/breakthrough-prizes-2022/

 

 


  • Subscriber

DALL·E Now Available Without Waitlist (openai.com)

DALL-E is going to change a lot of things very quickly. I can see the nutjobs who run NFTs using this to generate all kinds of content to boost that market, and with each turn the AI gets smarter, so it starts to mimic more of what it’s seen, thereby generating more art and potentially fake images too.


  • Subscriber

Boston Dynamics, along with 5 other robotics companies (Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics and Unitree Robotics), is pledging not to support weaponisation of their products.

https://www.axios.com/2022/10/06/boston-dynamics-pledges-weaponize-robots

I guess that won't stop them from selling robots to the military, police, security forces, and anyone else who will then be able to weaponise them themselves... Don't think they are going to turn those DOD contracts down.


  • Subscriber

New AI tool might help in rare disease diagnosis and treatment prediction

The AI tool acts like a search engine for pathology images.


A new self-teaching artificial intelligence (AI) tool has been developed to aid diagnosis and treatment prediction in rare diseases.

Because information on them is scant, rare diseases present a challenge for clinicians, both in identifying them and in determining the best course of treatment.

Artificial intelligence, or machine-learning, tools are emerging as useful aids to the experience and critical thinking of human clinicians. AI tools have been shown to be effective in diagnosis, prognosis and treatment prediction in epilepsy studies, in screening for potentially cancerous lung nodules, and in the care of patients with traumatic brain injuries.

There are understandable concerns, however, about the implementation of AI in medicine, because AI is only as good as the data that it is fed – so it may reflect certain biases in demographics. It’s also very new and not well legislated.

But AI experts and clinicians alike are confident that machine-learning algorithms will play a role in patient care in the near future, not as replacements for human doctors, but by complementing human experience and wisdom to maximise our ability to help patients.



Researchers from the machine learning-focused Mahmood Lab at Brigham and Women’s Hospital in Boston, Massachusetts, have developed a deep learning algorithm which can teach itself how to identify similar features in large pathology image repositories to help diagnose and generate treatment guides for rare diseases. Their results are published in the journal Nature Biomedical Engineering.

Known as SISH (Self-Supervised Image search for Histology), the new tool is a type of “self-teaching” algorithm. At its most basic, deep learning attempts to imitate the complex neural networks in our own brains through algorithms. These algorithms can then “learn” things about data sets by finding patterns and trends, much like we do in our daily experience.

SISH acts like a search engine for pathology images. Among its many potential applications, it has proven adept at identifying rare diseases and helping clinicians determine which patients are likely to respond to certain therapies.

“We show that our system can assist with the diagnosis of rare diseases and find cases with similar morphologic patterns without the need for manual annotations and large datasets for supervised training,” says senior author Dr Faisal Mahmood, from the Brigham’s Department of Pathology. “This system has the potential to improve pathology training, disease subtyping, tumour identification, and rare morphology identification.”

Given AI has been around for a little bit now, it’s not surprising that other tools have been tested for these kinds of uses.

Fundamentally, they all rely on the data – and modern electronic databases can store an immense number of digital records. A significant proportion of this data comes in the form of images. In pathology, these are particularly whole-slide images (WSIs): complete scans of a microscope slide saved as a single high-resolution digital file.

However, these high-fidelity images can be large files. As more of them fill digital repositories, searching through WSI databases becomes time-consuming, computationally complex and expensive.

Brigham researchers overcame this issue with SISH, which teaches itself to recognise features and find analogous cases in databases at a constant speed regardless of the size of the database.
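The retrieval property described here – search time that stays flat as the archive grows – is the same property a hash index provides. A toy sketch of that idea, purely illustrative and not the actual SISH algorithm (which learns its own image representations):

```python
from collections import defaultdict

# Toy O(1) lookup: map a compact image "fingerprint" to matching cases.
# SISH learns its representations; this quantised stand-in is hypothetical.
index = defaultdict(list)

def fingerprint(features):
    # Quantise a feature vector into a small hashable code.
    return tuple(round(x, 1) for x in features)

def add_case(case_id, features):
    index[fingerprint(features)].append(case_id)

def query(features):
    # Dictionary lookup cost does not depend on how many slides are indexed.
    return index.get(fingerprint(features), [])

add_case("case-001", [0.12, 0.88, 0.45])
add_case("case-002", [0.70, 0.20, 0.10])
print(query([0.12, 0.88, 0.45]))  # ['case-001']
```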

The pathologists and AI experts tested the speed and ability of SISH to correctly retrieve information for both common and rare cancers.

SISH was able to successfully obtain images accurately and at high speed from a database of tens of thousands of WSIs from 22,000 patient cases. Over 50 different disease types and more than a dozen anatomical sites were represented in the data set.

The new algorithm outperformed other methods in many scenarios, including in identifying disease subtypes. Of particular importance was SISH’s ability to maintain a constant search speed even as the databases expanded in size, and when using diverse data sets.



A drawback, however, is that SISH does require a huge amount of memory. The new tool also has limited context awareness when it comes to WSIs in large tissues, and it is currently only useful in identifying single images.

But the researchers are confident their new tool represents a development in the proficiency of rare disease diagnosis and analysis.

“As the sizes of image databases continue to grow, we hope that SISH will be useful in making identification of diseases easier,” said Mahmood. “We believe one important future direction in this area is multimodal case retrieval which involves jointly using pathology, radiology, genomic and electronic medical record data to find similar patient cases.”

https://cosmosmagazine.com/health/ai-diagnosis-rare-disease/

 


  • Subscriber

AI recruitment tools are “automated pseudoscience”, say Cambridge researchers

And they created a website to show you how.


AI is set to bring in a whole new world in a huge range of industries. Everything from art to medicine is being overhauled by machine learning.

But researchers from the University of Cambridge have published a paper in Philosophy & Technology to call out AI used to recruit people for jobs and boost workplace diversity – going so far as to call them an “automated pseudoscience”.

“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage, a researcher in AI ethics.

“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”

Recent years have seen the emergence of AI tools marketed as an answer to the lack of diversity in the workforce. This can be anything from chatbots and resume scrapers used to line up prospective candidates, through to analysis software for video interviews.

Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns, and even facial micro-expressions, to assess huge pools of job applicants for the right personality type and ‘culture fit’.

But AI isn’t very good at removing human biases. To train a machine-learning algorithm, you have to first put in lots and lots of past data. In the past, for example, AI tools have discounted women altogether in fields where more men were traditionally hired. In a system created by Amazon, resumes were discounted if they included the word ‘women’s’ – as in a “women’s debating team” – and graduates of two all-women colleges were downgraded. Similar problems occur with race.



The Cambridge researchers suggest that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may ultimately increase uniformity in the workforce. This is because the technology is calibrated to search for the employer’s fantasy ‘ideal candidate’, which is likely based on demographically exclusive past results.

The researchers actually went a step further, and worked with a team of Cambridge computer science undergraduates, to build an AI tool modelled on the technology. You can check it out here.

The tool demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression.

“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage.

“As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”

The researchers suggest that these programs are a dangerous example of ‘technosolutionism’: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.

“Industry practitioners developing hiring AI technologies must shift from trying to correct individualized instances of ’bias’ to considering the broader inequalities that shape recruitment processes,” the team write in their paper.

“This requires abandoning the ‘veneer of objectivity’ that is grafted onto AI systems, so that technologists can better understand their implication — and that of the corporations within which they work — in the hiring process.”

https://cosmosmagazine.com/technology/ai-recruitment-tools-diversity-cambridge-automated-pseudoscience/

 


  • Subscriber

Space-walking robot using innovative and dextrous mechanics could transform how we build things off world

Conditions in space are extreme but technology needs to be built and maintained.


A new robot, dubbed “E-walker,” with greater dexterity than any before, has been designed to help in construction projects such as the assembly of telescopes and maintenance of space stations in orbit.

More than ever before, as humanity spreads our collective wings, we need to build and maintain facilities in space. Off world, conditions are pretty extreme, and our technology has a limited lifespan due to cosmic rays and solar storms.

Extravehicular activities (endeavours performed by astronauts outside their spacecraft), robotics and autonomous systems already play a role in servicing and maintenance missions, including manufacturing, assembly, maintenance, astronomical and Earth observations, and debris removal.

But advances in robotics and autonomous systems are being pushed in the hope of reducing the risk to humans.

See, we’re not very well built for space. Not being able to breathe without very expensive machinery strapped to a very expensive spacesuit is a problem to begin with. Add to that the harmful rays, lack of gravity, isolation, and any number of things which can go wrong (just watch the film Gravity…) and you have yourself a pretty stressful construction site.


A team of engineers has designed a state-of-the-art ‘walking robot’ that could transform the way we do large construction projects in space. They tested their robot on the in-space assembly of a 25-metre Large Aperture Space Telescope (LAST). Their findings are published in the journal Frontiers in Robotics and AI.

“We need to introduce sustainable, futuristic technology to support the current and growing orbital ecosystem,” explains corresponding author Manu Nair, a PhD candidate at the University of Lincoln in the UK. “As the scale of space missions grows, there is a need for more extensive infrastructures in orbit. Assembly missions in space would hold one of the key responsibilities in meeting the increasing demand.”

Space construction has become especially topical with the launch of the Hubble Space Telescope and its successor the James Webb Space Telescope. The things we’re putting into space (which require in-space assembly and maintenance) are getting bigger and more complex.

Large telescopes like these can’t be assembled on Earth and launched due to their size. They have to be assembled in space.


“The prospect of in-orbit commissioning of a LAST has fuelled scientific and commercial interests in deep-space astronomy and Earth observation,” says Nair. “Although conventional space walking robotic candidates are dexterous, they are constrained in manoeuvrability. Therefore, it is significant for future in-orbit walking robot designs to incorporate mobility features to offer access to a much larger workspace without compromising the dexterity.”

Nair’s team has proposed a robot with seven degrees of freedom, coming from seven joints in its limbs. The robot can move along a surface to perform different tasks and has been dubbed an “E-Walker”.

Simulations of the E-Walker’s capabilities were compared to the existing Canadarm2 and European Robotic Arm on the International Space Station.

“Our analysis shows that the proposed innovative E-Walker design proves to be versatile and an ideal candidate for future in-orbit missions. The E-Walker would be able to extend the life cycle of a mission by carrying out routine maintenance and servicing missions post-assembly, in space,” Nair says.

A scaled-down prototype for Earth-based tests has also been developed.

“The analysis of the prototype identifies it to be an ideal candidate for servicing, maintenance, and assembly operations on Earth, such as carrying out regular maintenance checks on wind turbines,” Nair says.

Having done the design engineering analysis, the next steps include verifying whether the robot actually works in a space build.

The E-Walker prototyping work is now in progress at the University of Lincoln which hopes the experimental verification and validation will be published separately.

https://cosmosmagazine.com/space/walking-robot-space-construction/

 


  • 1 month later...
  • Subscriber

AI may have solved a debate on whether a dinoprint was from a herbivore or meat eater

It was better than palaeontologists at guessing the correct one

Covered footprints of dinosaurs stampeding some 95 million years ago. Nearly 4000 dinosaur footprints extend over 210 sq m. Dinosaur Stampede National monument, Lark Quarry, central west Queensland, Australia. Credit: Auscape/Universal Images Group via Getty Images

An international team of researchers has, for the first time, used AI to analyse the tracks of dinosaurs, and the AI has come out on top – beating trained palaeontologists at their own game.

“In extreme examples of theropod and ornithopod footprints, their footprint shapes are easy to tell apart – theropods with long, narrow toes and ornithopods with short, dumpy toes. But it is the tracks that are in-between these shapes that are not so clear cut in terms of who made them,” one of the researchers, University of Queensland palaeontologist Dr Anthony Romilio, told Cosmos.

“We wanted to see if AI could learn these differences and, if so, then could be tested in distinguishing more challenging three-toed footprints.”

Theropods are meat eating dinosaurs, while ornithopods are plant eating, and getting this analysis wrong can alter the data which shows diversity and abundance of dinosaurs in the area, or could even change what we think are the behaviours of certain dinos.

One set of dinosaur prints in particular had been a struggle for the researchers to analyse. Large footprints at the Dinosaur Stampede National monument in Queensland had divided Romilio and his colleagues. The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago, and could have been from either a meat eating theropod or a plant eating ornithopod.

“I consider them footprints of a plant-eater while my colleagues share the much wider consensus that they are theropod tracks.”

So an AI, called a Convolutional Neural Network, was brought in to be the deciding factor.

“We were pretty stuck, so thank god for modern technology,” says Dr Jens Lallensack, lead author from Liverpool John Moores University in the UK.

“In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater.

“So – to really check our science – we decided to go to five experts for clarification, plus use AI.”

The AI was given nearly 1,500 already known tracks to learn which dinosaurs were which. The tracks were simple line drawings to make it easier for the AI to analyse.
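For readers curious what such a classifier looks like in practice, a minimal convolutional network for two-class footprint outlines might be sketched as below. The framework, layer sizes and input resolution are assumptions chosen for illustration, not the architecture the team published.

```python
import torch
import torch.nn as nn

class FootprintCNN(nn.Module):
    """Toy two-class classifier: theropod vs ornithopod footprint outlines."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # 1 channel: line drawings
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 32 * 32, 2),  # 2 outputs: meat-eater / plant-eater
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = FootprintCNN()
dummy = torch.zeros(1, 1, 128, 128)  # a 128x128 greyscale outline (assumed size)
print(model(dummy).shape)            # torch.Size([1, 2]) -> class logits
```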

Then they began testing. Firstly, 36 new tracks were given to a team of experts, the AI and the researchers.

“Each of us had to sort these into the categories of footprints left by meat-eaters and those by plant-eaters,” says Romilio.

“In this the AI was the clear winner with 90% correctly identified. Me and one of my colleagues came next with ~75% correct.”

Then they went for the crown jewel – the Dinosaur Stampede National monument tracks. When the AI analysed these, it came back with a pretty strong result that they’re plant-eating ornithopod tracks. It’s not entirely sure, though: the data suggests there’s a 1 in 5,000,000 chance they could be theropod tracks instead.

This is still early days for using AI in this way. In the future, the researchers are hoping for funding for a FrogID-style app which anyone could use to analyse dinosaur tracks.

“Our hope is to develop an app so anyone can take a photo on their smartphone, use the app and it will tell you what type of dinosaur track it is,” says Romilio.

“It will also be useful for drone work survey for dinosaur tracksites, collecting and analysing image data and identifying fossil footprints remotely.” The paper has been published in the Royal Society Interface.

 


  • Subscriber

The fastest underwater soft robot yet: Butterfly bot


Marine “soft robots” could help with deep ocean exploration, and cleaning up pollution such as microplastics.

Robots made from soft material can get into places, and do tasks, that traditional, hard robots can’t, particularly underwater.

But they’re currently pretty slow swimmers.

Now a team of US researchers has made a soft robot that’s nearly four times faster than any other reported soft swimmer.

The researchers, who have published a description of their robot in Science Advances, based their robot design on manta rays.

“To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals – such as manta rays – are able to swim much faster, and much more efficiently,” says co-author Jie Yin, an associate professor of mechanical and aerospace engineering at North Carolina State University, US.

“We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots. The prototypes we’ve developed work exceptionally well.”

The researchers developed two types of robot, both called “butterfly bots” because the way they move looks like someone swimming with butterfly stroke.

One butterfly bot was fast, zipping through the water at 3.74 body lengths per second. The other can get to 1.7 body lengths per second but is more manoeuvrable, making sharp turns.

“Researchers who study aerodynamics and biomechanics use something called a Strouhal number to assess the energy efficiency of flying and swimming animals,” says first author Dr Yinding Chi, a recent PhD graduate of North Carolina State University.

“Peak propulsive efficiency occurs when an animal swims or flies with a Strouhal number of between 0.2 and 0.4. Both of our butterfly bots had Strouhal numbers in this range.”
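The Strouhal number itself is just a ratio, St = f·A/U, where f is the flapping frequency, A the flapping amplitude and U the forward speed. With made-up numbers (not measurements from the paper), the calculation looks like this:

```python
def strouhal(freq_hz, amplitude_m, speed_m_s):
    """Strouhal number: St = f * A / U."""
    return freq_hz * amplitude_m / speed_m_s

# Hypothetical figures for illustration: flapping at 4 Hz with a 3 cm amplitude
# while moving forward at 0.4 m/s.
print(strouhal(4.0, 0.03, 0.4))  # ~0.3, inside the efficient 0.2-0.4 band
```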

The butterfly bots have soft, silicone bodies, with “bistable” wings attached. These wings can snap comfortably from one position to another and back again.

As air chambers in the butterfly bot’s body fill and deflate, the silicone bends and the wings are snapped up and down. This makes them flap, and propels the robot forward.

“Most previous attempts to develop flapping robots have focused on using motors to provide power directly to the wings,” says Yin.

“Our approach uses bistable wings that are passively driven by moving the central body. This is an important distinction, because it allows for a simplified design, which lowers the weight.”

The manoeuvrable bot has a drive unit connected to each wing, meaning one wing can flap at a time and the robot can turn.

The faster bot has just the one unit moving both wings, sacrificing agility for a lighter weight – and more speed.

“This work is an exciting proof of concept, but it has limitations,” says Yin.

“Most obviously, the current prototypes are tethered by slender tubing, which is what we use to pump air into the central bodies. We’re currently working to develop an untethered, autonomous version.”

https://cosmosmagazine.com/technology/butterfly-bot-soft-robot/

 


  • Subscriber

We’re in the largest boom in the application and development of AI for science in history.

Landmark CSIRO report assesses the impact of artificial intelligence on science and its future.


A world-first report from Australia’s science agency, CSIRO, has found that scientists are adopting artificial intelligence (AI) at an unprecedented rate.

Analysing the impact of AI on scientific discovery, ‘Artificial intelligence for science’ draws insight from millions of peer-reviewed scientific papers published over 63 years and identifies key issues ahead for the sector.

The report found that artificial intelligence is now implemented in 98 per cent of scientific fields, and by September 2022 approximately 5.7% of all peer-reviewed research worldwide was on the topic.

“AI is no longer just the domain of computer scientists or mathematicians; it is now a significant enabling force across all fields of science, which is something we live every day at CSIRO, where digital technologies are accelerating the pace and scale of our research in fields ranging from agriculture to energy to manufacturing and beyond,” says CSIRO Chief Scientist Professor Bronwyn Fox.

AI in science has grown significantly since 1960

The report uses a bibliometric analysis – statistical methods analysing trends in peer-reviewed research – to determine what percentage of the 333 research fields studied were publishing on artificial intelligence between 1960-2022.

Analysing all disciplines of natural science, physical science, social science and the arts and humanities, the report found that only 14% of fields were publishing on artificial intelligence in 1960. Just over a decade later in 1972 that proportion had reached more than half, and at present sits at 98%.


Growth in AI publishing has been greatest in the past 5-6 years, with the relative share of AI publishing rising from 2.9% of all publications in 2016 to 5.7% of all publications in 2022. Among the most prolific adopters are the fields of mathematics, decision sciences, engineering, neuroscience and health professions.
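The “relative share” metric quoted here is simply AI publications divided by all publications in a given year. A toy calculation with invented counts (the report’s real analysis covers millions of papers) shows the arithmetic:

```python
import pandas as pd

# Invented counts purely to illustrate how a publishing-share metric is computed.
df = pd.DataFrame({
    "year":       [2016, 2022],
    "ai_papers":  [58_000, 171_000],
    "all_papers": [2_000_000, 3_000_000],
})

df["ai_share_pct"] = 100 * df["ai_papers"] / df["all_papers"]
print(df[["year", "ai_share_pct"]])  # roughly 2.9% in 2016 and 5.7% in 2022
```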

“Human curiosity will always be at the heart of science, but these technologies, combined with deep domain understanding, are increasingly helping to open-up new frontiers for knowledge discovery,” says Fox.

“AI is also helping to deliver higher-impact, real-world solutions to Australia’s greatest challenges, like AI to help detect disease, predict bushfires and manage the enormous amount of data we are gathering about our universe.”

And there are no apparent signs of this current boom slowing down.

So, what does the future hold for artificial intelligence?

According to the report, however, the pathway to artificial intelligence adoption and capability uplift is challenging; researchers are likely to experience both success and failure as they develop AI systems within their domains of expertise.

The report identifies six future development pathways for researchers and research organisations seeking to upgrade their AI capability for the future – harnessing the benefits while mitigating the associated risks.

  1. Software and hardware upgrades. Purpose-built processors designed for machine learning are speeding up computations, while quantum computing could lead to transformative leaps in computational power.
  2. The quest for better data. The era of “big data” may be transitioning into the era of better data. Recent breakthroughs have been achieved using smaller datasets that are well-curated, fit-for-purpose and provenance assured.
  3. Education, training and capability uplift. Between 2017-2020 the number of university courses teaching AI increased by 103%. Research organisations can take advantage of this to recruit AI talent and uplift capabilities of existing staff.
  4. Toward human centric artificial intelligence. In the vast majority of cases AI will be augmenting, not replacing, the human scientist. Issues of trust, transparency and reliability will be important for scientists and reviewers working on AI systems.
  5. Improving workforce diversity. Improving the gender, ethnic and cultural diversity of the AI research workforce will lead to better science outcomes.
  6. Ethical AI. Research organisations will be challenged to develop capabilities, technologies and cultures that deliver increasingly ethical AI.

“To make the most of this technology for Australia, there are key issues we will need to tackle. CSIRO has one of the largest teams of digital experts in the country, but these are not issues that can be solved by one organisation alone,” says Fox.

“The development of trusted, responsible and ethical AI solutions will be increasingly important globally, and because we have moved quickly to build deep expertise in the field, Australia has a unique opportunity to lead in this area.”

https://cosmosmagazine.com/technology/artificial-intelligence-science/

 

 


  • 2 weeks later...
  • Moderator
40 minutes ago, nudge said:

https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/

This is quite an interesting read and experiment... The output is not perfect, but it's scary how easy it is to make something like that.

@Mel81x

That is scary, and yet another reason not to plaster your pictures all over the internet. 


  • Subscriber
Just now, Tommy said:

That is scary, and yet another reason not to plaster your pictures all over the internet. 

Yep. And the technology will only get better. I have a feeling it will become quite difficult to distinguish between a genuine image/video and AI-generated fakes in the future, which will be a proper nightmare in terms of misinformation, propaganda, and choosing what to believe.


  • Subscriber
10 hours ago, nudge said:

https://arstechnica.com/information-technology/2022/12/thanks-to-ai-its-probably-time-to-take-your-photos-off-the-internet/

This is quite an interesting read and experiment... The output is not perfect, but it's scary how easy it is to make something like that.

@Mel81x

Was always coming. After the AI learns, it will make a photoshop duel a non-contest between itself and another human. I am going to love what it does when a hacking group sets it loose with no restrictions.

