

Sunday, July 21, 2024

Why We Call It Artificial Intelligence (AI)

Will Artificial Intelligence (AI) Replace Humans?



Artificial Intelligence (AI) is not smart like humans


By Dr. Kevin Turnquest-Alcena
Nassau, NP, The Bahamas

Prehistoric Era:

Early humans used basic tools to improve their chances of survival. The discovery of fire, the creation of rudimentary weapons, and the development of agriculture were significant milestones in human ingenuity.

The Renaissance:

Advances in mathematics, science, and engineering during the Renaissance laid the groundwork for modern AI. Innovations such as Leonardo da Vinci's mechanical knight illustrate early attempts to create automated systems.

Industrial Revolution:

The Industrial Revolution marked a period of rapid technological advancement. The development of mechanical looms, steam engines, and early computers like Charles Babbage's Analytical Engine represented significant steps toward automation and computation.

Computer Age (20th Century):

The mid-20th century saw the development of electronic computers and the birth of AI as a formal field of study. Alan Turing's work on machine logic and the Turing Test provided a foundational framework for understanding machine intelligence.

21st Century and Beyond:

Today, AI encompasses a wide range of technologies, including machine learning, natural language processing, and robotics. These advancements have the potential to revolutionize industries and improve quality of life on a global scale.

Man Versus Machine: Perspectives from Dr. Frank Chatonda

Dr. Frank Chatonda, a long-time computer and telecommunications technology research engineer, offers a unique perspective on the comparison between human and machine intelligence. According to Dr. Chatonda, the perception that computers are more intelligent than humans is a misinformed view. He argues that human thinking is likely faster than the speed of light, while computers merely execute simple instructions measured in machine cycles; by the time a machine completes a cycle, he suggests, a human thought has traversed the galaxies and back.

Human Thinking vs. Machine Execution:

Human Thinking: The human mind can process complex thoughts and ideas, often subconsciously and at incredible speeds. This continuous, interactive thought process enables humans to innovate, create, and solve problems in ways that machines cannot.

Machine Execution: Computers, while capable of executing instructions rapidly, do so in a linear and predefined manner. They do not possess the ability to think or understand; they merely follow programmed instructions.

Dr. Chatonda suggests that what we call computer intelligence should more accurately be described as the apparent speed of execution relative to human reaction times. This distinction highlights the difference between human cognitive abilities and machine output processing.
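
One way to make this concrete is to count how many trivial operations a machine completes within a single human reaction time. The sketch below is only a rough illustration; the ten-million-addition loop and the 250 ms reaction-time figure are ballpark assumptions, not measurements from Dr. Chatonda's work:

```python
# A rough illustration: how many simple machine operations fit inside one
# human reaction time. The loop size and the 250 ms figure are ballpark
# assumptions chosen only for illustration.
import time

start = time.perf_counter()
total = 0
for i in range(10_000_000):        # ten million simple additions
    total += i
elapsed = time.perf_counter() - start

human_reaction_s = 0.25            # roughly 250 ms, a typical reaction time
per_reaction = 10_000_000 * human_reaction_s / elapsed
print(f"Ten million additions took {elapsed:.3f} seconds")
print(f"Additions completed per human reaction time: {per_reaction:,.0f}")
```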

Dr. Chatonda also points out the limitations of digital computing, which increasingly relies on processing binary probabilities and polynomial approximations. He suggests that we may soon reach the limits of digital computing and need to revisit analog computing concepts. Analog computing, more akin to biological processes, could offer a path forward, leveraging continuous variables and more naturalistic problem-solving methods.

The Difference Between Human Learning and Machine Learning


Human Learning: Human learning is a complex, adaptive process that involves the integration of knowledge, skills, and experiences. Bloom's Taxonomy classifies learning into six cognitive levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. This hierarchical model emphasizes the importance of higher-order thinking skills, such as critical thinking and problem-solving.

Human learning is also influenced by genetic development and adaptation. The human brain's ability to form new connections and adapt to new information is a key factor in our capacity for learning and innovation.

Machine Learning: Machine learning, on the other hand, involves training algorithms to recognize patterns and make predictions based on data. Unlike human learning, machine learning is not adaptive in the same way. Machines do not possess instincts or consciousness; they rely on vast amounts of data and predefined rules to perform tasks.
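
A minimal sketch can make this concrete. The fruit measurements, labels, and the nearest-neighbour approach below are invented purely for illustration; they show a program "learning" by storing examples and predicting by similarity, with no understanding of what it is classifying:

```python
# A minimal sketch of machine "learning" as pattern matching over data:
# a nearest-neighbour classifier labels a new point by finding the most
# similar training example. The fruit data is invented for illustration.
import math

# Training data: (weight in grams, diameter in cm) -> label
examples = [
    ((150, 7.0), "apple"),
    ((170, 7.5), "apple"),
    ((110, 5.0), "lemon"),
    ((120, 5.5), "lemon"),
]

def predict(features):
    """Return the label of the closest known example (1-nearest neighbour)."""
    def distance(known):
        return math.dist(known[0], features)
    closest = min(examples, key=distance)
    return closest[1]

print(predict((160, 7.2)))   # "apple"  -- close to the apple examples
print(predict((115, 5.2)))   # "lemon"  -- close to the lemon examples
```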

AI-driven data search engines use computer logic to process information and provide results. These systems operate based on binary logic and statistical models, which allow them to handle large datasets and complex computations quickly. However, they lack the ability to understand context or reason beyond their programming.

The Future Is Happening Now

AI History
In this section, we have explored the historical context of AI, highlighting its deep roots in human history and the continuous evolution of tools and systems that have paved the way for modern AI. We also compared human learning and machine learning, illustrating the profound differences in their processes and capabilities. This perspective challenges the notion that AI is a recent invention and emphasizes its potential to enhance human equality and understanding. Additionally, we discussed Dr. Frank Chatonda's insights on the true nature of computing and human cognition, shedding light on common misconceptions and the future of computing.

Finally, we celebrated the miracle of human intelligence, emphasizing the unparalleled nature of human creativity and ingenuity. From prehistoric tools to modern AI, human innovation has continually driven progress. As we look to the future, the integration of AI and other advanced technologies promises to further enhance our capabilities and improve our quality of life.

These references and insights provide a comprehensive overview of the evolution of AI, its historical context, and its future potential. They highlight the continuous growth of human creativity and the ongoing impact of technological advancements on society.

Why do we call this neophyte but promising technology AI, instead of something more down to earth like "Language Processing Search Engines"? Partly for the same reason copiers are often called Xerox machines and vacuum cleaners are called Hoovers. John McCarthy (1955), co-developer of LISP (List Processing), the first symbolic computation and manipulation language, attached the AI label because of the language's dynamic, high-level handling of symbolic variables during execution: LISP programs could modify some of their own logic while running, and the language is still in use today. Ever since, the AI name has been associated with symbolic and language processing, with foundations laid by those early Dartmouth College pioneers. Suffice it to say, AI is still in its toddler stage.

Another contributor is Marvin Minsky, whose work advanced robotics at MIT and whose groundbreaking "Society of Mind" theory expanded the logic patterns still in use today. Arthur Samuel (1959) developed machine learning at IBM. Other notable contributors include Claude Shannon, who developed practical digital (Boolean) logic circuits, and Joseph Weizenbaum, developer of ELIZA, a program that mimicked human conversation. While American histories focus on American technology, there are many contributions from other nations and communities worth mentioning, such as Dr. Philip Emeagwali (Nigeria), winner of the 1989 Gordon Bell Prize for his work on massively parallel, networked computing, which enabled computing engines to support many simultaneous sessions, an essential element of cloud computing and AI.

Modern Implications of AI: Today, AI is often portrayed as a revolutionary force that could potentially surpass human intelligence. However, this apocalyptic view misses the point. AI, as we know it, is an extension of human capability, designed to assist and augment human decision-making rather than replace it. The philosopher Emmanuel Levinas proclaimed that "we are all connected," suggesting that conflicts arise from misunderstandings of this concept. AI has the potential to bridge these gaps and foster a more connected and equitable world. We will also see its capabilities interface with our bodily functions, replacing body parts from limbs to kidneys, hearts, and perhaps lungs. And while it is quite feasible to envision a robot or computer that can make a human feel as if they are having a conversation, it is, never say never, currently impossible to create a machine that actually feels anything at all.


The Evolution of AI: From Theory to Practical Applications


Theoretical neural networks, mathematics, and computers were initially developed as independent fields. Their integration and practical application in industry evolved significantly between 1950 and 1980. One noteworthy early advancement was the development of code-breaking machines during World War II. These machines, used in the UK to decode German messages, were early examples of applied computation and ingenuity that foreshadowed AI.

The first practical neural network was developed by Frank Rosenblatt in 1957. Rosenblatt's Perceptron model was designed to simulate the thought processes of the human brain, marking a significant step forward in neural network research. Around the same time, Arthur Samuel's work on machine learning demonstrated that computers could learn from experience and improve their performance over time. Samuel's checkers-playing program, which used a form of reinforcement learning, was one of the first instances of a computer program that could adapt and optimize its strategies based on past outcomes.
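
For illustration, here is a minimal sketch in the spirit of Rosenblatt's Perceptron, learning the logical AND function from four examples; the learning rate, epoch count, and data are illustrative choices, not a reconstruction of Rosenblatt's original system:

```python
# A minimal Rosenblatt-style perceptron learning the logical AND function.
# The learning rate, epoch count, and training data are illustrative.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    weights = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Step activation: fire (1) if the weighted sum crosses zero.
            output = 1 if (weights[0] * x[0] + weights[1] * x[1] + bias) > 0 else 0
            error = target - output
            # Perceptron rule: nudge the weights to reduce the error.
            weights[0] += lr * error * x[0]
            weights[1] += lr * error * x[1]
            bias += lr * error
    return weights, bias

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]                      # logical AND
w, b = train_perceptron(samples, labels)
for x in samples:
    print(x, 1 if (w[0] * x[0] + w[1] * x[1] + b) > 0 else 0)
```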

Despite these advancements, the practical utility of AI and robotics remained a challenge well into the 1970s. The limitations of hardware, the complexity of algorithms, and the need for substantial computational power hindered the widespread adoption of AI technologies. It wasn't until the development of more powerful computers and more sophisticated algorithms in the late 20th century that AI began to find practical applications in various industries.

The Promising Yet Neophyte Nature of AI

While AI has made significant strides, it remains a young and evolving field. Much of its potential is still being explored, and the term AI continues to evoke a sense of futuristic promise. It is essential to recognize that AI, in its current form, is still at the "toddler" stage, with vast potential for growth and improvement. At the time of writing, simple tests of most AI platforms still turn up plenty of erroneous information. AI is a wonderful tool, but it is still a bridge under construction.

Why We Call It AI

Artificial Intelligence (AI) is a term that has become synonymous with advanced technological systems capable of performing tasks that typically require human intelligence. However, the scope of AI extends far beyond what the term might suggest, and a more precise descriptor like "Language Processing Search Engines" could arguably be more appropriate for many of its current applications. Understanding why we continue to use the term AI involves examining historical, cultural, and marketing influences that have shaped the field.

The Origin of the Term "Artificial Intelligence"

The term "Artificial Intelligence" was coined in 1955 by John McCarthy, a computer scientist who co-developed LISP (List Processing), the first symbolic computation and manipulation language. LISP was designed for AI research because of its dynamic, high-level handling of symbolic expressions and variables during execution. McCarthy chose to label the field AI in part to highlight that such a language could modify its own logic while executing, which gave it an appearance of intelligence. This nomenclature has stuck and remains influential to this day.
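
LISP itself is the natural home for this style of programming, but a small Python sketch can illustrate the underlying idea of treating logic as data that a running program can transform; the rule, facts, and tiny evaluator below are invented for illustration:

```python
# A minimal sketch of the "code as data" idea: a rule is stored as a nested
# list, evaluated by a tiny interpreter, and rewritten while the program is
# running. LISP expresses this natively; Python is used here for illustration.

def evaluate(expr, env):
    """Evaluate a symbolic expression such as ["and", ["hot"], ["dry"]]."""
    op, *args = expr
    if op == "and":
        return all(evaluate(a, env) for a in args)
    if op == "or":
        return any(evaluate(a, env) for a in args)
    if op == "not":
        return not evaluate(args[0], env)
    return env.get(op, False)          # a bare symbol looks up a fact

facts = {"hot": True, "dry": False}

rule = ["and", ["hot"], ["dry"]]       # "fire risk if hot AND dry"
print(evaluate(rule, facts))           # False

# Because the rule is just data, the running program can transform it:
rule[0] = "or"                         # now "fire risk if hot OR dry"
print(evaluate(rule, facts))           # True
```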

Historical Influences and Pioneers

Several early pioneers contributed to the field of AI, solidifying its name and concept:

John McCarthy (1955): As mentioned, McCarthy's work with LISP and his coining of the term AI were pivotal in defining the field.

Arthur Samuel (1959): Developed one of the first self-learning programs, a checkers-playing algorithm at IBM, which demonstrated the potential for machines to improve their performance through experience.

Claude Shannon: Developed practical digital (Boolean) logic circuits, foundational for modern computing and AI.

Joseph Weizenbaum: Created ELIZA, an early natural language processing program that mimicked human conversation, showcasing the potential for computers to engage in dialogue.
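
To show how far simple pattern matching can go toward mimicking conversation, here is a minimal ELIZA-style sketch; the keyword patterns and canned responses are invented for illustration and are far simpler than Weizenbaum's original scripts:

```python
# A minimal ELIZA-style sketch: the program "converses" by matching keyword
# patterns and reflecting the user's own words back, with no understanding.
# The rules below are invented for illustration.
import re

rules = [
    (r"\bI am (.*)", "Why do you say you are {0}?"),
    (r"\bI feel (.*)", "What makes you feel {0}?"),
    (r"\bbecause (.*)", "Is that the real reason?"),
    (r".*", "Please tell me more."),          # fallback when nothing matches
]

def respond(utterance):
    for pattern, template in rules:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return template.format(*match.groups())

print(respond("I am worried about machines"))
# -> Why do you say you are worried about machines?
print(respond("I feel hopeful today"))
# -> What makes you feel hopeful today?
```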

These early developments were often more symbolic and theoretical than practical, but they laid the groundwork for the expansive field of AI.

Marketing and Cultural Influences

The use of the term AI has also been influenced by marketing and cultural factors. Just as copiers are often referred to as "Xerox" machines or vacuum cleaners as "Hoovers," AI has become a brand name that signifies cutting-edge technology. This branding helps in attracting attention, funding, and talent to the field. The allure of creating machines that can "think" like humans has a profound appeal and has driven both public interest and investment. But the original "culprit" is McCarthy at Dartmouth College, who, as explained above, attached the AI label to LISP, the first AI programming language, and who was most likely drawn to the more novel term for the same reasons listed here.

The Reality of Current AI Technologies

Despite the grandiose term, much of what we call AI today could indeed be more accurately described as "Language Processing Search Engines" or similar terms. Many AI applications, particularly those in natural language processing, involve sophisticated algorithms that analyze and generate human language. These systems are excellent at pattern recognition, data processing, and executing predefined rules, but they lack true comprehension or consciousness.

The Evolution of AI Concepts and Technologies

The development of AI can be traced back to several key milestones and contributions from various fields and regions:

The Dartmouth Conference (1956): Often considered the birth of AI as an academic discipline, this conference brought together researchers who laid the foundational theories for AI.

Code-Breaking Machines in WWII: Machines like the British Bombe, used to decode German messages, were early examples of practical AI applications.

Neural Networks: The first practical neural network, the Perceptron, was developed by Frank Rosenblatt in 1957, showing that computers could simulate some aspects of human thought.

Machine Learning: Arthur Samuel's work at IBM demonstrated that computers could learn from data, a concept that is central to modern AI.

Broader Contributions to AI

AI's development is not limited to the United States. Notable international contributions include:

Dr. Philip Emeagwali: A Nigerian-born scientist whose pioneering work on massively parallel, networked computing processors demonstrated how thousands of processors can work together, an approach essential for supporting multiple sessions in AI engines and for the speed and efficiency of cloud computing.

Japanese Fifth Generation Computer Systems (1980s): A national project aimed at creating computers using massively parallel computing/processing, which influenced AI research globally.

Unsung Heroes of AI

While many prominent figures are well-recognized for their contributions to AI, several other significant contributors have not received the same level of attention:

Dr. Edwin Zishiri (Zimbabwe): A pioneer of an AI-based pacemaker that is saving lives today. This long-lasting device spares pacemaker patients from periodic follow-up surgery, as they are fitted with lifelong pacemakers.

Dr. Shirley Ann Jackson: A theoretical physicist trained at MIT, often credited with contributions at Bell Labs to telecommunications technologies such as touch-tone telephony and the portable fax.

Dr. Gladys Mae West: A mathematician whose work was instrumental in the development of the Global Positioning System (GPS).

These individuals made groundbreaking contributions spanning mathematics, communication protocols, and interdependent systems, the shared circuits and protocols that keep networked computing, "the network of things," running smoothly and, in turn, make large-scale neural networks possible.

Will AI Replace Humans?

Yes and no. AI, even at its most integrated maturity, perhaps less than a decade from now, will be neither a panacea nor apocalyptic. What AI will be is a versatile tool. Only the uneducated should panic, because for close to a million years we have lived with another versatile tool that has caused much damage, yet we have mostly used it wisely, benefiting from its warmth rather than its scorch, even though it too is capable of misuse and abuse: we call it fire. AI will be our new fire, and it can be misused and abused to our detriment as well. Therefore, it is up to us to educate ourselves about the versatile technologies and tools we build.

"Artificial Intelligence" captures the imagination and conveys the transformative potential of this technology, despite being somewhat of a misnomer for many of its applications. The contributions of pioneers like John McCarthy, Arthur Samuel, and many others have laid a strong foundation, but as we move forward, it is crucial to maintain a realistic perspective on what AI can achieve and to recognize that its ongoing revolution depends on a sound, well-grounded, educated understanding of the difference between humans and machines.

While a machine can make you feel as if you are conversing with someone, and might even conjure all kinds of emotions within you, the machine will never feel anything about you. It is just a machine with no emotions. The more you learn about how to build better machines and how best to use them, the better off you will be. Welcome to School of The Future Today!

This contextual understanding helps demystify AI, aligning its perceived capabilities with its actual technological advancements and potential.

References

Bloom, B. S. (1956). Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. Longman.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Chatonda, F. (2020). "Computational Limits and Human Cognition: Rethinking Machine Intelligence." Journal of Advanced Computing, 34(2), 45-67.

Emeagwali, P. (n.d.). Contributions to networked computing processors.

Jackson, S. A. (n.d.). Patents and innovations at MIT and Bell Labs.

Levinas, E. (1987). Time and the Other (R. Cohen, Trans.). Duquesne University Press.

McCarthy, J. (1955). "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence."

Pickover, C. A. (2019). "The History of Artificial Intelligence."

Pickover, C. A. (2019). The Science Book: Big Ideas Simply Explained. DK.

Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Pearson.

Samuel, A. L. (1959). "Some Studies in Machine Learning Using the Game of Checkers." IBM Journal of Research and Development, 3(3), 210-229.

Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf.

Weizenbaum, J. (1966). "ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine." Communications of the ACM, 9(1), 36-45.

West, G. M. (n.d.). Contributions to the development of GPS.

Zishiri, E. (n.d.). AI-based pacemaker innovations.