Scientists have developed an algorithm that captures our learning abilities, enabling computers to recognise and draw simple visual concepts that are mostly indistinguishable from those created by humans.

When humans are exposed to a new concept, such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet, they often need only a few examples to understand its make-up and recognise new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans, such as ATMs reading the numbers written on a cheque, they typically need to be given hundreds or thousands of examples to perform with similar accuracy.

"It has been very difficult to build machines that require as little data as humans when learning a new concept," said Ruslan Salakhutdinov from the University of Toronto.

The work marks a significant advance in the field, one that dramatically shortens the time it takes computers to 'learn' new concepts and broadens their application to more creative tasks. The research appears in the journal Science.

The researchers developed a 'Bayesian Program Learning' (BPL) framework, in which concepts are represented as simple computer programmes. For instance, the letter 'A' is represented by computer code, resembling the work of a computer programmer, that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programmes itself by constructing code to produce the letter it sees.

Unlike standard computer programmes that produce the same output every time they run, these probabilistic programmes produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter 'A'.

While standard pattern-recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns "generative models" of processes in the world, making learning a matter of 'model building' or 'explaining' the data provided to the algorithm. In the case of writing and recognising letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently.

"Our results show that by reverse engineering how people think about a problem, we can develop better algorithms," said Brenden Lake from New York University.

"Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science," said Salakhutdinov.

"Moreover, this work points to promising methods to narrow the gap for other machine learning tasks," said Lake.
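The idea of a probabilistic programme for a letter can be illustrated with a minimal sketch. This is not the researchers' BPL model, which learns stroke primitives from data and performs Bayesian inference over candidate programmes; it is a toy function (the name `draw_A` and its parameters are invented for illustration) showing how code that samples its parameters afresh on each run produces different, but recognisable, instances of the same concept:

```python
import random

def draw_A(seed=None):
    """Toy probabilistic 'programme' for the letter A: three strokes
    whose parameters are sampled on every call, so each execution
    yields a slightly different, but recognisable, drawing."""
    rng = random.Random(seed)
    slant = rng.gauss(0.0, 0.05)   # horizontal offset of the apex
    width = rng.gauss(1.0, 0.10)   # spread of the legs at the baseline
    bar_h = rng.gauss(0.45, 0.05)  # height of the crossbar
    apex = (0.5 + slant, 1.0)
    return [
        [(0.5 - width / 2, 0.0), apex],                        # left leg
        [apex, (0.5 + width / 2, 0.0)],                        # right leg
        [(0.5 - width / 4, bar_h), (0.5 + width / 4, bar_h)],  # crossbar
    ]

# Two unseeded calls give two different drawings of the same letter;
# repeating a seed reproduces the same drawing.
first, second = draw_A(), draw_A()
```

Because the variation lives in a few interpretable stroke parameters rather than in raw pixels, a handful of examples is enough to pin down what the concept looks like, which is the data-efficiency the article describes.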
Posted on 25/04/2021 at 05:42 by faluminctor