Artificial Intelligence & Robotics




It isn’t always the best idea to design robots that look like us – just ask R2-D2

Humanoids have long been regarded as the ‘holy grail’ but there’s room for a lot more diversity in the robot design world.

In August of 2021, Tesla CEO Elon Musk revealed his company’s plan to produce a humanoid robot called 'Optimus'. The announcement, accompanied by a dancing actor in a skintight robot costume, was ridiculed by sceptics, but Tesla is not alone in the quest to create human-shaped machines.

Research labs and companies around the globe are investing serious amounts of money in these complex robots, for uses from warehouses to nursing homes. Their pitch is two-fold: the human form is relatable, and it fits seamlessly into our existing infrastructure. These reasons are sensible, but there might be a better way forward.

Humanoids, sometimes called the 'holy grail' of robotics, are robots with a torso, two legs, two arms, and a head. Google image search the word 'robot', and your browser will fill with pictures of humanoids.

Some robot creators argue that the autonomous machines we’ll interact with in the future need to look like us, because we relate best to other humans. I've found in my own research that we connect emotionally with machines onto which we can project our own traits, but the notion that we most relate to a human shape may be missing something.

People name their robot vacuum. Would they like it better if it was a humanoid that walked around vacuuming? Probably not. When something has a human form, we have very different expectations for its intelligence and how it will perform. Our current robots don’t live up to those expectations, nor will we have a robot butler anytime soon.

Fortunately for robot designers, things don’t need to be humanoid for us to relate to them – they just need to mimic cues that we recognise. Artists and animators have honed the art of capturing our emotional expression and then putting it into other shapes – think Bambi, Cars, or the Pixar lamp.

Japanese cities have attracted visitors with mascots like Kumamon, a simple, cute bear character created by the government of Kumamoto Prefecture to boost tourism. And thousands of Star Wars fans love R2-D2, who is basically a tin can on wheels and is yet more emotionally compelling than most humanoids.

Avoiding the human form when designing robots not only avoids the problem of expectation management, it can also side-step a minefield of social injustices. For example, researchers from the University of Bielefeld, Germany, found that people will rate a humanoid robot with long hair better suited for stereotypical female tasks like household and care work and less suitable for doing technical repairs, compared to an identical robot with short hair.

Previous studies have also shown that artificial agents with human faces are seen as more attractive, trustworthy, persuasive, and intelligent when they’re of the same ethnicity as the person rating them. Technology that looks too human not only reflects our biases, it can also entrench and perpetuate harmful stereotypes.

There’s also the logistical argument for humanoid robots: we live in a world built for humans, with stairs and door knobs and narrow passageways. In order to navigate these spaces, some designers say, we need robots that are built like us. It’s certainly true that many spaces are difficult to navigate with wide bodies, wheels, or treads. But as we look to the future, maybe there are better choices.

Robots can be smaller than the eye can see, or larger than a house. They can be encased in metals or soft materials. They can roll, climb, slither, jump, or ooze from point A to B. According to roboticist Prof Robin Murphy of Texas A&M University, the best form is “whatever shape gets the job done.”

Ironically, challenging the notion that the 'right way' to do things is the way humans do them creates opportunities to reach broader human demographics, specifically people who often find themselves left out of the design process entirely.

Laurel Riek, a roboticist at the University of California, San Diego, has pointed out that rather than throwing our funds into expensive, difficult-to-engineer, bipedal humanoid robots, we could invest in making infrastructure more friendly for wheelchairs, walkers, and strollers. If a wheelchair can access a space, so can a simple and efficient robot on wheels.

Human beings are far from uniform in our construction and abilities, and if we design our world to reflect that, we could radically increase accessibility at the same time as developing better, cheaper robots with a greater range of skills. And besides, why take the stairs when you can climb the walls?

While there will always be some use cases for humanoid robots, we may be able to do better for work, companionship, and society than replicating what we already have. As robots move from behind factory walls into shared spaces, robotics provides us a wonderful opportunity: it allows us to think outside of ourselves, not only in designing our robots, but also in our ideas for how to construct a better world.





Existing computer simulations are not yet powerful enough to harness AlphaFold for drug discovery

Models can’t accurately predict the interactions of many protein structures and molecules at the same time.


Scientists want to use computer models to help reduce the cost and time associated with drug discovery, and to develop new antibiotics to fight the growing crisis of antimicrobial resistance. But a new study shows that using the latest tools together is little better than guesswork at the moment.

This is a barrier to drug development – at least as the computer models exist now – according to a new study published in Molecular Systems Biology.

Researchers from Massachusetts Institute of Technology (MIT) explored whether existing computer programs could accurately predict the interactions between antibacterial compounds and bacterial protein structures generated by DeepMind’s AlphaFold – an artificial intelligence program that generates 3D protein structures from their amino acid sequences.

AlphaFold is exciting the science world.

But the MIT team found that the predictions of existing models, called molecular docking simulations, performed little better than chance.

“Breakthroughs such as AlphaFold are expanding the possibilities for in silico (ie by computers) drug discovery efforts, but these developments need to be coupled with additional advances in other aspects of modelling that are part of drug discovery efforts,” says senior author James Collins, professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.

“Our study speaks to both the current abilities and the current limitations of computational platforms for drug discovery.”

The hope is that scientists could use modelling to perform large-scale screening for new compounds that affect previously untargeted bacterial proteins, with the end result being new antibiotics that work in unprecedented ways.

The team studied the interactions of 296 essential proteins from Escherichia coli with 218 antibacterial compounds, using molecular docking simulations that predict how strongly two molecules will bind together based on their shapes and physical properties.

Previously, these simulations have been used successfully to screen large numbers of compounds against a single protein target to identify compounds that bind the best. But here, the predictions became much less accurate when attempting to screen many compounds against many potential protein targets.

In fact, the model produced false positive rates similar to true positive rates when simulating interactions between existing drugs and their targets.

“Utilising these standard molecular docking simulations, we obtained an auROC value of roughly 0.5, which basically says you’re doing no better than if you were randomly guessing,” Collins explains.
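Collins’s point about an auROC of roughly 0.5 can be made concrete. The sketch below uses invented random data, not the study’s: auROC equals the probability that a randomly chosen positive (say, a true drug–target pair) is ranked above a randomly chosen negative, so an uninformative scorer lands near 0.5 while a perfect one reaches 1.0.

```python
import random

def auroc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the probability that a randomly chosen positive is scored
    higher than a randomly chosen negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

random.seed(0)
labels = [random.randint(0, 1) for _ in range(2000)]

# Random scores carry no information about the labels,
# so the AUC hovers around 0.5 -- the coin-flip baseline.
random_scores = [random.random() for _ in labels]
print(auroc(labels, random_scores))

# A scorer that peeks at the labels separates the classes
# completely, giving an AUC of 1.0.
perfect_scores = [y + random.random() * 0.5 for y in labels]
print(auroc(labels, perfect_scores))
```

This is why an auROC near 0.5 for docking predictions means the simulations, as tested, added essentially no signal over chance.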


But this wasn’t due to some fault of AlphaFold, as similar results occurred when they used the same modelling approach with protein structures that had been experimentally determined in the lab.

“AlphaFold appears to do roughly as well as experimentally determined structures, but we need to do a better job with molecular docking models if we’re going to utilise AlphaFold effectively and extensively in drug discovery,” adds Collins.

One explanation for this poor performance is that the protein structures fed into the model are static, but in real biological systems proteins are flexible and often shift their configurations.

The researchers were able to improve the performance of the molecular docking simulations by running them through four additional machine-learning models trained on data that describe how proteins and other molecules interact with each other.





Realistic androids coming closer, as scientists teach a robot to share your laughter

Researchers think it may take 10 to 20 years or more before we can finally have a casual chat with a robot like we would with a friend.


Do you ever laugh at an inappropriate moment?

A team of Japanese researchers has taught a robot when to laugh in social situations, which is a major step towards creating an android that will be “like a friend.”

“We think that one of the important functions of conversational AI is empathy,” says Dr Koji Inoue, an assistant professor at Kyoto University’s Graduate School of Informatics, and lead author on a paper describing the research, published in Frontiers in Robotics and AI.

“Conversation is, of course, multimodal, not just responding correctly. So we decided that one way a robot can empathize with users is to share their laughter, which you cannot do with a text-based chatbot.”

The researchers trained an AI with data from 80 speed-dating dialogues, from a matchmaking marathon with Kyoto University students. (Imagine meeting a future partner at an exercise designed to teach a robot to laugh…)

“Our biggest challenge in this work was identifying the actual cases of shared laughter, which isn’t easy, because as you know, most laughter is actually not shared at all,” says Inoue.

“We had to carefully categorise exactly which laughs we could use for our analysis and not just assume that any laugh can be responded to.”

They then added this system to a hyper-realistic android named Erica, and tested the robot on 132 volunteers.

Participants listened to one of three different types of dialogue with Erica: one where she was using the shared laughter system, one where she didn’t laugh at all, and one where she always laughed whenever she heard someone else do it.

The participants then scored the interaction for empathy, naturalness, similarity to humans, and understanding.

The researchers found that the shared-laughter system scored higher than either baseline.

While they’re pleased with this result, the researchers say that their system is still quite rudimentary: they need to categorise and examine many other types of laughter before Erica can chuckle naturally.

“There are many other laughing functions and types which need to be considered, and this is not an easy task. We haven’t even attempted to model unshared laughs even though they are the most common,” says Inoue.

Plus, it doesn’t matter how realistic a robot’s laugh is if the rest of its conversation is unnatural.

“Robots should actually have a distinct character, and we think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style,” says Inoue.

“We do not think this is an easy problem at all, and it may well take more than 10 to 20 years before we can finally have a casual chat with a robot like we would with a friend.”




Lucrative Breakthrough Prizes go to AlphaFold developers – among others

Proteins sit at the heart of three of the five $3 million prizes.


A pair of AI researchers behind the protein-folding program AlphaFold has received one of five $3 million (US) “Breakthrough Prizes”.

The Breakthrough Prizes were founded a decade ago by a group of internet entrepreneurs including Yuri Milner, a Russian-Israeli billionaire, and Mark Zuckerberg, the American chief executive of Meta.

They comprise five annual awards for researchers who have made “game-changing discoveries” in three fields: Life Sciences, Fundamental Physics, and Mathematics.

They’re considered the most lucrative awards in science.

UK-based AI researchers Demis Hassabis and John Jumper, the brains behind DeepMind’s AlphaFold, are among the laureates for the Life Sciences prize.

AlphaFold has predicted the structures of almost all proteins known to science. Understanding how proteins fold is a notoriously difficult problem, and AlphaFold represents a colossal leap forward in the field.

Two other Life Sciences prizes have been awarded, one to Clifford Brangwynne, based at Princeton University in the US, and Anthony Hyman, based at the Max Planck Institute in Germany, for their discovery of a new process in cellular dynamics. They’ve shown how proteins and other biomolecules in cells can interact with each other without a cell membrane present.

Another has gone to Emmanuel Mignot, from Stanford University in the US, and Masashi Yanagisawa, at Japan’s University of Tsukuba, for their work on narcolepsy – a disorder that features chronic attacks of drowsiness.

Mignot and Yanagisawa both discovered separately that a protein called orexin, which normally regulates wakefulness, plays a key role in narcolepsy. When the immune system attacks the cells that make orexin, people can develop neurodegenerative narcolepsy. This work has spurred treatments which relieve narcoleptic symptoms.

The Breakthrough Prize in Fundamental Physics went to an international group of four researchers for “foundational work in the field of quantum information”, while the prize in mathematics went to US mathematician Daniel Spielman for “contributions to theoretical computer science and mathematics”.





DALL·E Now Available Without Waitlist (openai.com)

DALL-E is going to change a lot of things very quickly. I can see the nutjobs who run NFTs using this to generate all kinds of content to boost that market, and with each turn the AI gets smarter, so it starts to mimic more of what it's seen, thereby generating more art and potentially fake images too.


Boston Dynamics and five other robotics companies (Agility Robotics, ANYbotics, Clearpath Robotics, Open Robotics and Unitree Robotics) are pledging not to support weaponisation of their products.


I guess that won't stop them from selling robots to the military, police, security forces, and anyone else who will then be able to weaponise them themselves... Don't think they are going to turn those DOD contracts down.



New AI tool might help in rare disease diagnosis and treatment prediction

The AI tool acts like a search engine for pathology images.


A new self-teaching artificial intelligence (AI) tool has been developed to aid diagnosis and treatment prediction in rare diseases.

Because information on them is so scant, rare diseases present a challenge for clinicians, both in their identification and in determining the best course of treatment.

Artificial intelligence and machine-learning tools are emerging as useful aids to the experience and critical thinking of human clinicians. AI tools have been shown to be effective in diagnosis, prognosis and treatment prediction in epilepsy studies, in flagging potentially cancerous lung nodules, and in the care of patients with traumatic brain injuries.

There are understandable concerns, however, about the implementation of AI in medicine: AI is only as good as the data it is fed, so it may reflect demographic biases. It is also very new and not yet well regulated.

But AI experts and clinicians alike are confident that machine-learning algorithms will play a role in patient care in the near future, not as replacements for human doctors, but by complementing human experience and wisdom to maximise our ability to help patients.

Researchers from the machine learning-focused Mahmood Lab at Brigham and Women’s Hospital in Boston, Massachusetts, have developed a deep learning algorithm which can teach itself how to identify similar features in large pathology image repositories to help diagnose and generate treatment guides for rare diseases. Their results are published in the journal Nature Biomedical Engineering.

Known as SISH (Self-Supervised Image search for Histology), the new tool is a type of “self-teaching” algorithm. At its most basic, deep learning attempts to imitate the complex neural networks in our own brains through algorithms. These algorithms can then “learn” things about data sets by finding patterns and trends, much like we do in our daily experience.

SISH acts like a search engine for pathology images. Among its many potential applications, it has proven adept at identifying rare diseases and helping clinicians determine which patients are likely to respond to certain therapies.

“We show that our system can assist with the diagnosis of rare diseases and find cases with similar morphologic patterns without the need for manual annotations and large datasets for supervised training,” says senior author Dr Faisal Mahmood, from the Brigham’s Department of Pathology. “This system has the potential to improve pathology training, disease subtyping, tumour identification, and rare morphology identification.”

Given AI has been around for a little bit now, it’s not surprising that other tools have been tested for these kinds of uses.

Fundamentally, they all rely on the data – and modern electronic databases can store an immense number of digital records. A significant proportion of this data comes in the form of images. In pathology, these are typically whole slide images (WSIs): complete scans of a microscope slide, saved as a single high-resolution digital file.

However, these high-fidelity images can be large files. As more of them fill digital repositories, searching through WSI databases can be time consuming, and computationally complex and expensive.

Brigham researchers overcame this issue with SISH, which teaches itself to recognise features and find analogous cases in databases at a constant speed, regardless of the size of the database.
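SISH’s actual pipeline (self-supervised hashing over whole-slide images) is more involved, but the “search engine for pathology images” idea can be sketched as nearest-neighbour retrieval over feature vectors. Everything below (case IDs, vectors, dimensionality) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend each archived pathology image has already been reduced to a
# feature vector by some self-supervised encoder (invented data here).
database = rng.normal(size=(10_000, 128))           # 10k archived cases
case_ids = np.array([f"case-{i:05d}" for i in range(10_000)])

def top_k_similar(query, db, ids, k=5):
    """Return the ids and similarities of the k database vectors
    most similar to the query, using cosine similarity."""
    q = query / np.linalg.norm(query)
    d = db / np.linalg.norm(db, axis=1, keepdims=True)
    sims = d @ q
    best = np.argsort(sims)[::-1][:k]
    return ids[best], sims[best]

# A query resembling case 1234 (its vector plus a little noise)
# should retrieve that case as the nearest neighbour.
query = database[1234] + rng.normal(scale=0.1, size=128)
ids, sims = top_k_similar(query, database, case_ids)
print(ids[0])
```

Note that this brute-force scan grows linearly with the number of archived cases; SISH’s hashing-based index is what lets its search time stay constant as the database expands.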

The pathologists and AI experts tested the speed and ability of SISH to correctly retrieve information for both common and rare cancers.

SISH was able to retrieve images accurately and at high speed from a database of tens of thousands of WSIs from 22,000 patient cases. Over 50 different disease types and more than a dozen anatomical sites were represented in the data set.

The new algorithm outperformed other methods in many scenarios, including in identifying disease subtypes. Of particular importance was SISH’s ability to maintain a constant search speed even as the databases expanded in size, and when using diverse data sets.

A drawback, however, is that SISH does require a huge amount of memory. The new tool also has limited context awareness when it comes to WSIs in large tissues, and it is currently only useful in identifying single images.

But the researchers are confident their new tool represents a development in the proficiency of rare disease diagnosis and analysis.

“As the sizes of image databases continue to grow, we hope that SISH will be useful in making identification of diseases easier,” said Mahmood. “We believe one important future direction in this area is multimodal case retrieval which involves jointly using pathology, radiology, genomic and electronic medical record data to find similar patient cases.”





AI recruitment tools are “automated pseudoscience”, say Cambridge researchers

And they created a website to show you how.


AI is set to bring in a whole new world in a huge range of industries. Everything from art to medicine is being overhauled by machine learning.

But researchers from the University of Cambridge have published a paper in Philosophy & Technology calling out AI used to recruit people for jobs and boost workplace diversity – going so far as to call it “automated pseudoscience”.

“We are concerned that some vendors are wrapping ‘snake oil’ products in a shiny package and selling them to unsuspecting customers,” said co-author Dr Eleanor Drage, a researcher in AI ethics.

“By claiming that racism, sexism and other forms of discrimination can be stripped away from the hiring process using artificial intelligence, these companies reduce race and gender down to insignificant data points, rather than systems of power that shape how we move through the world.”

Recent years have seen the emergence of AI tools marketed as an answer to the lack of diversity in the workforce. These range from chatbots and resume scrapers that line up prospective candidates to analysis software for video interviews.

Those behind the technology claim it cancels out human biases against gender and ethnicity during recruitment, instead using algorithms that read vocabulary, speech patterns, and even facial micro-expressions, to assess huge pools of job applicants for the right personality type and ‘culture fit’.

But AI isn’t very good at removing human biases. To train a machine-learning algorithm, you first have to feed it lots and lots of past data. AI tools have, for example, discounted women altogether in fields where more men were traditionally hired. In a system created by Amazon, resumes were discounted if they included the word ‘women’s’ – as in “women’s debating team” – and graduates of two all-women colleges were downgraded. Similar problems occur with race.

The Cambridge researchers suggest that even if you remove ‘gender’ or ‘race’ as distinct categories, the use of AI may ultimately increase uniformity in the workforce. This is because the technology is calibrated to search for the employer’s fantasy ‘ideal candidate’, which is likely based on demographically exclusive past results.

The researchers actually went a step further and worked with a team of Cambridge computer science undergraduates to build an AI tool modelled on the technology. You can check it out here.

The tool demonstrates how arbitrary changes in facial expression, clothing, lighting and background can give radically different personality readings – and so could make the difference between rejection and progression.

“While companies may not be acting in bad faith, there is little accountability for how these products are built or tested,” said Drage.

“As such, this technology, and the way it is marketed, could end up as dangerous sources of misinformation about how recruitment can be ‘de-biased’ and made fairer.”

The researchers suggest that these programs are a dangerous example of ‘technosolutionism’: turning to technology to provide quick fixes for deep-rooted discrimination issues that require investment and changes to company culture.

“Industry practitioners developing hiring AI technologies must shift from trying to correct individualized instances of ’bias’ to considering the broader inequalities that shape recruitment processes,” the team write in their paper.

“This requires abandoning the ‘veneer of objectivity’ that is grafted onto AI systems, so that technologists can better understand their implication — and that of the corporations within which they work — in the hiring process.”





Space-walking robot using innovative and dextrous mechanics could transform how we build things off world

Conditions in space are extreme but technology needs to be built and maintained.


A new robot, dubbed the “E-Walker”, with greater dexterity than any before, has been designed to help in construction projects such as the assembly of telescopes and the maintenance of space stations in orbit.

More than ever before, as humanity spreads its collective wings, we need to build and maintain facilities in space. Off world, conditions are pretty extreme, and our technology has a limited lifespan due to cosmic rays and solar storms.

Extravehicular activities (endeavours performed by astronauts outside their spacecraft), robotics and autonomous systems already play a role in servicing and maintenance missions, including manufacturing, assembly, maintenance, astronomical and Earth observations, and debris removal.

But advances in robotics and autonomous systems are being pushed in the hope of reducing the risk to humans.

See, we’re not very well built for space. Not being able to breathe without very expensive machinery strapped to a very expensive spacesuit is a problem to begin with. Add to that the harmful rays, lack of gravity, isolation, and any number of things which can go wrong (just watch the film Gravity…) and you have yourself a pretty stressful construction site.

A team of engineers has designed a state-of-the-art ‘walking robot’ that could transform the way we do large construction projects in space. They tested their robot design on the in-space assembly of a 25-metre Large Aperture Space Telescope (LAST). Their findings are published in the journal Frontiers in Robotics and AI.

“We need to introduce sustainable, futuristic technology to support the current and growing orbital ecosystem,” explains corresponding author Manu Nair, a PhD candidate at the University of Lincoln in the UK. “As the scale of space missions grows, there is a need for more extensive infrastructures in orbit. Assembly missions in space would hold one of the key responsibilities in meeting the increasing demand.”

Space construction has become especially topical with the launch of the Hubble Space Telescope and its successor the James Webb Space Telescope. The things we’re putting into space (which require in-space assembly and maintenance) are getting bigger and more complex.

Large telescopes like these can’t be assembled on Earth and launched due to their size. They have to be assembled in space.

“The prospect of in-orbit commissioning of a LAST has fuelled scientific and commercial interests in deep-space astronomy and Earth observation,” says Nair. “Although conventional space walking robotic candidates are dexterous, they are constrained in manoeuvrability. Therefore, it is significant for future in-orbit walking robot designs to incorporate mobility features to offer access to a much larger workspace without compromising the dexterity.”

Nair’s team has proposed a robot with seven degrees of freedom, coming from seven joints in its limbs. The robot can move along a surface to perform different tasks and has been dubbed the “E-Walker”.
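The article doesn’t give the E-Walker’s actual kinematics, but “seven degrees of freedom from seven joints” can be illustrated generically: in a serial chain, each revolute joint contributes one degree of freedom, and the end-effector pose follows from composing the joint rotations along the links. A minimal planar sketch, with invented link lengths:

```python
import math

def forward_kinematics(joint_angles, link_lengths):
    """Planar forward kinematics for a serial chain: accumulate each
    joint's rotation into a running heading, then step along each link
    to find the end-effector position and final orientation."""
    x = y = 0.0
    heading = 0.0
    for angle, length in zip(joint_angles, link_lengths):
        heading += angle
        x += length * math.cos(heading)
        y += length * math.sin(heading)
    return x, y, heading

# Seven joints -> seven degrees of freedom (illustrative 0.4 m links).
links = [0.4] * 7
straight = forward_kinematics([0.0] * 7, links)   # fully extended arm
print(straight)
```

With all joint angles at zero the chain lies fully extended along the x-axis; bending each joint a little curls the arm, which is the extra reach and manoeuvrability that more joints buy.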

Simulations of the E-Walker’s capabilities were compared to the existing Canadarm2 and European Robotic Arm on the International Space Station.

“Our analysis shows that the proposed innovative E-Walker design proves to be versatile and an ideal candidate for future in-orbit missions. The E-Walker would be able to extend the life cycle of a mission by carrying out routine maintenance and servicing missions post-assembly, in space,” Nair says.

A scaled-down prototype for Earth-based tests has also been developed.

“The analysis of the prototype identifies it to be an ideal candidate for servicing, maintenance, and assembly operations on Earth, such as carrying out regular maintenance checks on wind turbines,” Nair says.

Having done the design engineering analysis, the next steps include verifying whether the robot actually works in a space build.

The E-Walker prototyping work is now in progress at the University of Lincoln, which hopes the experimental verification and validation will be published separately.





AI may have solved a debate on whether a dinoprint was from a herbivore or meat eater

It was better than palaeontologists at guessing the correct one


Covered footprints of dinosaurs stampeding some 95 million years ago. Nearly 4,000 dinosaur footprints extend over 210 sq m. Dinosaur Stampede National Monument, Lark Quarry, central west Queensland, Australia. Credit: Auscape/Universal Images Group via Getty Images

An international team of researchers has, for the first time, used AI to analyse the tracks of dinosaurs, and the AI has come out on top – beating trained palaeontologists at their own game.

“In extreme examples of theropod and ornithopod footprints, their footprint shapes are easy to tell apart – theropods with long, narrow toes and ornithopods with short, dumpy toes. But it is the tracks that are in-between these shapes that are not so clear cut in terms of who made them,” one of the researchers, University of Queensland palaeontologist Dr Anthony Romilio, told Cosmos.

“We wanted to see if AI could learn these differences and, if so, then could be tested in distinguishing more challenging three-toed footprints.”

Theropods are meat-eating dinosaurs, while ornithopods are plant-eating, and getting this analysis wrong can alter data on the diversity and abundance of dinosaurs in an area, or could even change what we think about the behaviours of certain dinos.

One set of dinosaur prints in particular had been a struggle for the researchers to analyse. Large footprints at the Dinosaur Stampede National monument in Queensland had divided Romilio and his colleagues. The mysterious tracks were thought to be left during the mid-Cretaceous Period, around 93 million years ago, and could have been from either a meat eating theropod or a plant eating ornithopod.

“I consider them footprints of a plant-eater while my colleagues share the much wider consensus that they are theropod tracks.”

So, an AI called a convolutional neural network was brought in to be the deciding factor.

“We were pretty stuck, so thank god for modern technology,” says Dr Jens Lallensack, lead author from Liverpool John Moores University in the UK.

“In our research team of three, one person was pro-meat-eater, one person was undecided, and one was pro-plant-eater.

“So – to really check our science – we decided to go to five experts for clarification, plus use AI.”

The AI was given nearly 1,500 already known tracks to learn which dinosaurs were which. The tracks were simple line drawings to make it easier for the AI to analyse.
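The article doesn’t describe the network’s architecture, but the core operation of any convolutional network can be sketched: small filters slide over the image (here a toy stand-in for a footprint line drawing), producing feature maps that later layers would classify as theropod or ornithopod. A minimal illustration in plain NumPy, with invented data:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (really cross-correlation, as in most
    deep-learning libraries): slide the kernel over the image and take
    a dot product at each position to build a feature map."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy 8x8 "line drawing": a single vertical stroke of ink.
drawing = np.zeros((8, 8))
drawing[:, 4] = 1.0

# A vertical-edge filter responds strongly along the stroke; a full CNN
# stacks many learned filters like this, then classifies the resulting
# feature maps.
vertical_edge = np.array([[-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0],
                          [-1.0, 0.0, 1.0]])
fmap = conv2d(drawing, vertical_edge)
print(fmap.shape)
```

Training adjusts the filter values so that the feature maps end up capturing whatever distinguishes long, narrow theropod toes from short, dumpy ornithopod ones.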

Then they began testing. Firstly, 36 new tracks were given to a team of experts, the AI and the researchers.

“Each of us had to sort these into the categories of footprints left by meat-eaters and those by plant-eaters,” says Romilio.

“In this the AI was the clear winner with 90% correctly identified. Me and one of my colleagues came next with ~75% correct.”

Then, they went for the crown jewel – the Dinosaur Stampede National Monument tracks. When the AI analysed this, it came back with a pretty strong result that they’re plant-eating ornithopod tracks. It’s not entirely sure, though: the data suggest there’s a 1 in 5,000,000 chance it could be a theropod instead.

This is still early days for using AI in this way. In the future, the researchers hope to secure funding for a FrogID-style app which anyone could use to analyse dinosaur tracks.

“Our hope is to develop an app so anyone can take a photo on their smartphone, use the app and it will tell you what type of dinosaur track it is,” says Romilio.

“It will also be useful for drone survey work at dinosaur tracksites, collecting and analysing image data and identifying fossil footprints remotely.” The paper has been published in the Royal Society Interface.



The fastest underwater soft robot yet: Butterfly bot


Marine “soft robots” could help with deep ocean exploration, and cleaning up pollution such as microplastics.

Robots made from soft materials can get into places, and do tasks, that traditional hard robots can't, particularly underwater.

But they’re currently pretty slow swimmers.

Now a team of US researchers has made a soft robot that’s nearly four times faster than any other reported soft swimmer.

The researchers, who have published a description of their robot in Science Advances, based their robot design on manta rays.

“To date, swimming soft robots have not been able to swim faster than one body length per second, but marine animals – such as manta rays – are able to swim much faster, and much more efficiently,” says co-author Jie Yin, an associate professor of mechanical and aerospace engineering at North Carolina State University, US.

“We wanted to draw on the biomechanics of these animals to see if we could develop faster, more energy-efficient soft robots. The prototypes we’ve developed work exceptionally well.”

The researchers developed two types of robot, both called “butterfly bots” because their movement resembles someone swimming the butterfly stroke.

One butterfly bot is fast, zipping through the water at 3.74 body lengths per second. The other reaches 1.7 body lengths per second but is more manoeuvrable, able to make sharp turns.

“Researchers who study aerodynamics and biomechanics use something called a Strouhal number to assess the energy efficiency of flying and swimming animals,” says first author Dr Yinding Chi, a recent PhD graduate of North Carolina State University.

“Peak propulsive efficiency occurs when an animal swims or flies with a Strouhal number of between 0.2 and 0.4. Both of our butterfly bots had Strouhal numbers in this range.”
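The Strouhal number itself is a simple ratio: flapping frequency times stroke amplitude, divided by forward speed (St = fA/U). A minimal sketch, using made-up numbers (the article doesn't give the bots' frequency or amplitude):

```python
def strouhal(freq_hz, amplitude_m, speed_m_s):
    """Strouhal number St = f * A / U."""
    return freq_hz * amplitude_m / speed_m_s

def in_peak_efficiency_band(st, lo=0.2, hi=0.4):
    """Peak propulsive efficiency is reported for 0.2 <= St <= 0.4."""
    return lo <= st <= hi

# Illustrative (hypothetical) values: 1 Hz flapping, 6 cm stroke, 0.2 m/s speed
st = strouhal(1.0, 0.06, 0.2)
print(st, in_peak_efficiency_band(st))  # 0.3, inside the efficient band
```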

The butterfly bots have soft silicone bodies with “bistable” wings attached. These wings can snap rapidly from one position to another and back again.

As air chambers in the butterfly bot’s body fill and deflate, the silicone bends and the wings are snapped up and down. This makes them flap, and propels the robot forward.

“Most previous attempts to develop flapping robots have focused on using motors to provide power directly to the wings,” says Yin.

“Our approach uses bistable wings that are passively driven by moving the central body. This is an important distinction, because it allows for a simplified design, which lowers the weight.”
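Bistability is often modelled as a double-well energy landscape: two stable wing positions separated by a barrier, which the pneumatically driven body pushes the wing over so it snaps through. This toy model is my own illustration, not the paper's mechanics; the quartic energy function and its scale are arbitrary.

```python
import numpy as np

def wing_energy(x):
    """Toy double-well elastic energy with minima at x = -1 (down) and x = +1 (up)."""
    return (x**2 - 1.0)**2

xs = np.linspace(-1.5, 1.5, 301)
es = wing_energy(xs)

# The two stable states are the local minima; the energy barrier at x = 0
# must be overcome for the wing to snap from one state to the other.
minima = xs[(es < np.roll(es, 1)) & (es < np.roll(es, -1))]
print(minima)  # approximately [-1, 1]
```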

The manoeuvrable bot has a drive unit connected to each wing, meaning one wing can flap at a time and the robot can turn.

The faster bot has just the one unit moving both wings, sacrificing agility for a lighter weight – and more speed.

“This work is an exciting proof of concept, but it has limitations,” says Yin.

“Most obviously, the current prototypes are tethered by slender tubing, which is what we use to pump air into the central bodies. We’re currently working to develop an untethered, autonomous version.”



We’re in the largest boom in the application and development of AI for science in history.

Landmark CSIRO report assesses the impact of artificial intelligence on science and its future.


A world-first report from Australia’s science agency, CSIRO, has found that scientists are adopting artificial intelligence (AI) at an unprecedented rate.

Analysing the impact of AI on scientific discovery, ‘Artificial intelligence for science’ draws insight from millions of peer-reviewed scientific papers published over 63 years and identifies key issues ahead for the sector.

The report found that artificial intelligence is now implemented in 98% of scientific fields, and by September 2022 approximately 5.7% of all peer-reviewed research worldwide was on the topic.

“AI is no longer just the domain of computer scientists or mathematicians; it is now a significant enabling force across all fields of science, which is something we live every day at CSIRO, where digital technologies are accelerating the pace and scale of our research in fields ranging from agriculture to energy to manufacturing and beyond,” says CSIRO Chief Scientist Professor Bronwyn Fox.

AI in science has grown significantly since 1960

The report uses a bibliometric analysis – statistical methods analysing trends in peer-reviewed research – to determine what percentage of the 333 research fields studied were publishing on artificial intelligence between 1960-2022.

Analysing all disciplines of natural science, physical science, social science and the arts and humanities, the report found that only 14% of fields were publishing on artificial intelligence in 1960. Just over a decade later in 1972 that proportion had reached more than half, and at present sits at 98%.


Growth in AI publishing has been greatest in the past 5-6 years, with the relative share of AI publishing rising from 2.9% of all publications in 2016 to 5.7% of all publications in 2022. Among the most prolific adopters are the fields of mathematics, decision sciences, engineering, neuroscience and health professions.
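The bibliometric measures behind these figures are straightforward ratios. A minimal sketch (the publication counts below are illustrative, chosen only to reproduce the report's headline percentages, and the 333-field total is the one the report states it studied):

```python
def publication_share(topic_pubs, total_pubs):
    """Relative share of AI publications, as a percentage of all publications."""
    return 100.0 * topic_pubs / total_pubs

def field_coverage(fields_publishing_on_ai, total_fields=333):
    """Percentage of research fields with at least one AI publication."""
    return 100.0 * fields_publishing_on_ai / total_fields

# Hypothetical counts matching the report's headline figures
print(publication_share(57, 1000))  # 5.7 (the 2022 share)
print(round(field_coverage(326)))   # 98 (per cent of the 333 fields)
```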

“Human curiosity will always be at the heart of science, but these technologies, combined with deep domain understanding, are increasingly helping to open-up new frontiers for knowledge discovery,” says Fox.

“AI is also helping to deliver higher-impact, real-world solutions to Australia’s greatest challenges, like AI to help detect disease, predict bushfires and manage the enormous amount of data we are gathering about our universe.”

And there are no apparent signs of this current boom slowing down.

So, what does the future hold for artificial intelligence?

However, according to the report, the pathway to artificial intelligence adoption and capability uplift is challenging; researchers are likely to experience both success and failure as they develop AI systems within their domains of expertise.

The report identifies six future development pathways for researchers and research organisations seeking to upgrade their AI capability for the future – harnessing the benefits while mitigating the associated risks.

  1. Software and hardware upgrades. Purpose-built processors designed for machine learning are speeding up computations, while quantum computing could lead to transformative leaps in computational power.
  2. The quest for better data. The era of “big data” may be transitioning into the era of better data. Recent breakthroughs have been achieved using smaller datasets that are well-curated, fit-for-purpose and provenance assured.
  3. Education, training and capability uplift. Between 2017 and 2020 the number of university courses teaching AI increased by 103%. Research organisations can take advantage of this to recruit AI talent and uplift capabilities of existing staff.
  4. Toward human-centric artificial intelligence. In the vast majority of cases AI will be augmenting, not replacing, the human scientist. Issues of trust, transparency and reliability will be important for scientists and reviewers working on AI systems.
  5. Improving workforce diversity. Improving the gender, ethnic and cultural diversity of the AI research workforce will lead to better science outcomes.
  6. Ethical AI. Research organisations will be challenged to develop capabilities, technologies and cultures that deliver increasingly ethical AI.

“To make the most of this technology for Australia, there are key issues we will need to tackle. CSIRO has one of the largest teams of digital experts in the country, but these are not issues that can be solved by one organisation alone,” says Fox.

“The development of trusted, responsible and ethical AI solutions will be increasingly important globally, and because we have moved quickly to build deep expertise in the field, Australia has a unique opportunity to lead in this area.”



