Cognitive Technology
Cognitive technology is a field of computer science that mimics functions of the human brain through various means, including natural language processing, data mining and pattern recognition. It is expected to have a drastic effect on the way that humans interact with technology in coming years, particularly in the fields of automation, machine learning and information technology.
Cognitive technology is a subset of the broader field of artificial intelligence, which itself could be considered a subset of biomimetics. Although artificial intelligence has been the subject of research since the 1950s, cognitive technology evolved mostly out of the internet (particularly the web and the cloud).
One notable innovation that has become emblematic of cognitive technology is IBM's Watson supercomputer, which uses its 80-teraflop processing rate to essentially "think" as well as (or better than) a human brain. Cognitive technology has also been applied in the business sector, perhaps most famously by the streaming media service Netflix, which uses it to generate user recommendations (a function that has contributed greatly to the company's success).
Margaret Rouse Techopedia Cognitive-technology
Because cognitive technologies extend the power of information technology to tasks traditionally performed by humans, they can enable organizations to break prevailing trade-offs between speed, cost, and quality.
Deloitte Cognitive-technologies
The cognitive era is an ongoing movement of sweeping technological transformation. The impetus of this movement is the emerging field of cognitive technology, radically disruptive systems that understand unstructured data, reason to form hypotheses, learn from experience and interact with humans naturally. Success in the cognitive era will depend on the ability to derive intelligence from all forms of data with this technology.
Cognitive computing is perhaps most distinctive in that it upends the established IT doctrine that a technology's value diminishes over time; because cognitive systems improve as they learn, they actually become more valuable. This quality, among others, makes cognitive technology highly desirable for business, and many early adopters are leveraging the competitive advantage it affords.
The report, The Cognitive advantage: Insights from early adopters on driving business value, examines emerging patterns of early adoption. These patterns reveal a blueprint of sorts for future adopters.
Adopting a new technology starts with education
Cognitive initiatives come in all shapes and sizes, from transformational to tactical and everything in between. What the most successful projects have in common, no matter how ambitious, is they begin with a clear view of what the technology can do. Therefore, your first task is to gain a firm understanding of cognitive capabilities.
The cognitive era is here not only because the technology has come of age, but also because the phenomenon of big data requires it. Computing systems of the past can capture, move and store unstructured data, but they cannot understand it. Cognitive systems can. The application of this breakthrough is ideally suited to address business challenges like scaling human expertise and augmenting human intelligence.
Becoming a cognitive business looks different for almost everyone. Although a common perception is that cognitive technology is complex and difficult, that is not necessarily true. While some early adopters start with ambitions to transform their organization or industry, most start relatively small. Talk to many successful early adopters and you will hear some variation on the theme of "I want to improve one specific operational process."
Envision the possible and define your ideal outcomes
Judging by the success of early adopters, it's no surprise that more and more organizations are looking to adopt. Many are grappling with how and when, but why is the most important question.
No one starts down this path expressly to adopt cognitive technology; the whole point is to improve the organization. Adopting cognitive technology above all else should align to business priorities. Successful early adopters identify a problem, then build a case for how solving that problem will support specific outcomes like saving money, gaining customers or increasing revenue.
Good planning will result in the selection of a specific and strategic use case. Usage patterns tend to fall into four major categories that play to the strengths of cognitive technology.
· First, cognitive technology is often used to enable innovation and discovery by understanding new patterns, insights and opportunities.
· Second, it is often used to optimize operations to provide better awareness, continuous learning, better forecasting and optimization.
· Third, to augment and scale expertise by capturing and sharing the collective knowledge of the organization.
· Finally, to create adaptive, personalized experiences, including individualized products and services, to better engage customers and meet their needs.
One temptation, however, is to pursue cognitive technology for the technology’s sake. "Most of the failures we've seen are when you start with the technology instead of the business case," according to an IBM cognitive technology architect. "There are so many things you can do with cognitive technology, and people get really excited. But you need to focus on what impacts your bottom line.”
Conversely, overthinking can lead to inaction. According to a CEO who leverages cognitive technology, "a lot of companies are over-analyzing what they should be doing. They want a fully detailed design and guaranteed quality of output, but it doesn't work that way. It's better to start small with a good idea, and from there scale out and scale up. There is no universal template for success, but focus and persistence are a proven formula."
One IBM expert described this strategy as preventing the perfect from becoming the enemy of the good. In some cases, the best advice is to select a use case quickly to overcome the inertia created by a misguided desire for perfection. Adoption can mean something as basic as tapping a pre-built cognitive application. Starting small does not prohibit future expansion, and strategy can evolve over time.
"Often what’s difficult is the trade-off of fixing current pain points and doing something that aligns with long-term vision,” according to an IBM cognitive strategy specialist. “This is where people can struggle. It’s easy to be short-term focused. The challenge is to marry fixing the current problem with making sure it is the right move for the long term. So prioritizing the right use case that balances these things is the big challenge, and it’s where we can help the client the most."
As you develop your strategy, share ideas with other forward thinkers within your organization—their support is essential—or brainstorm with a member of the IBM team.
Choose the best implementation approach for you
Once you gain a realistic understanding of what cognitive technology can do, and specifically how it will help your business, it's time to choose your approach.
1. Deploy cognitive solutions and apps.
Many early adopters know exactly where they want to install cognitive technology, so they embed readily available cognitive offerings into existing workflows. The lever of this approach is a pre-built cognitive solution, like Watson Virtual Agent or Watson Explorer. These products are already coded, and only require installation and integration with data sources up front.
2. Build your own cognitive apps.
Developers can build their own cognitive apps through Bluemix, IBM’s cloud platform. More than 40,000 developers are building with APIs (application programming interfaces). The Watson Developer Cloud offers common language descriptions, demonstrations, case studies and starter kits for each API. “It’s good to let developers get in and play around,” said an IBM cognitive expert. “Because the technology is so new, it’s almost impossible to explain everything up front. You learn a lot by doing.”
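To make this concrete, here is a minimal sketch of calling one Watson API from Python. It uses the current ibm-watson SDK rather than the Bluemix-era client described above, and the API key, service URL, and sample text are placeholders rather than values from the article.

```python
# pip install ibm-watson
# Minimal sketch: analyze sentiment and keywords with Watson Natural
# Language Understanding. The API key and URL are placeholders; real
# values come from your own IBM Cloud service instance.
from ibm_watson import NaturalLanguageUnderstandingV1
from ibm_watson.natural_language_understanding_v1 import (
    Features, KeywordsOptions, SentimentOptions,
)
from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

authenticator = IAMAuthenticator("YOUR_API_KEY")  # placeholder credential
service = NaturalLanguageUnderstandingV1(
    version="2022-04-07",  # an NLU API version date
    authenticator=authenticator,
)
service.set_service_url(
    "https://api.us-south.natural-language-understanding.watson.cloud.ibm.com"  # placeholder region URL
)

response = service.analyze(
    text="I want to improve one specific operational process.",
    features=Features(
        sentiment=SentimentOptions(),
        keywords=KeywordsOptions(limit=3),
    ),
).get_result()

print(response["sentiment"]["document"])  # overall sentiment label and score
print(response["keywords"])               # top keywords with relevance scores
```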
3. Collaborate to create cognitive solutions.
If your strategy is ambitious and transformational, you will likely need to collaborate on unique and customized solutions. IBM offers various advisory programs designed to support these types of initiatives in which the adopter aims to change whole business functions or ways of working and competing. These programs often deliver prototypes, or proofs-of-concept, that simulate your desired cognitive-enabled state using your own data.
IBM Getting-started-cognitive-technology
In this article...
Why Cognitive Technology May Be A Better Term Than Artificial Intelligence
Checklist for Cognitive Technology adoption:
IBM Amplify Conference: Cognitive Technology: What It Is And Why Marketers Should Care
Cognitive technology for a personalized seizure predictive and healthcare analytic device
Application of Cognitive Technology in Healthcare Systems
Advantages of Cognitive Technology
Cognitive big data analysis for E-health and telemedicine using metaheuristic algorithms
Introduction to cognitive computing and its various applications
HetNet/M2M/D2D communication in 5G technologies
Azure Cognitive Service
Cognitive Services brings AI within reach of every developer and data scientist. With leading models, a variety of use cases can be unlocked. All it takes is an API call to embed the ability to see, hear, speak, search, understand, and accelerate advanced decision-making into your apps. Enable developers and data scientists of all skill levels to easily add AI capabilities to their apps.
Azure Cognitive-services intro
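To illustrate the "all it takes is an API call" claim above, here is a minimal sketch of one such call from Python against the Text Analytics sentiment endpoint. The resource name and subscription key are placeholders, and the v3.0 path is just one common API version.

```python
# pip install requests
# Minimal sketch: a single HTTPS call to an Azure Text Analytics
# sentiment endpoint. Resource name and key are placeholders for your
# own Cognitive Services resource.
import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
key = "YOUR_SUBSCRIPTION_KEY"                                   # placeholder

payload = {
    "documents": [
        {"id": "1", "language": "en",
         "text": "Cognitive Services brings AI within reach of every developer."}
    ]
}
resp = requests.post(
    f"{endpoint}/text/analytics/v3.0/sentiment",
    headers={"Ocp-Apim-Subscription-Key": key},
    json=payload,
    timeout=10,
)
resp.raise_for_status()
for doc in resp.json()["documents"]:
    # Each document gets an overall label plus per-class confidence scores.
    print(doc["sentiment"], doc["confidenceScores"])
```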
Why Cognitive Technology May Be A Better Term Than Artificial Intelligence
In general, most people would agree that the fundamental goals of AI are to enable machines to have cognition, perception, and decision-making capabilities that previously only humans and other intelligent creatures possessed. Max Tegmark simply defines AI as "intelligence that is not biological".
At the most abstract level, AI is machine behavior and functions that mimic the intelligence and behavior of humans. Specifically, this usually refers to what we come to think of as learning, problem solving, understanding and interacting with the real-world environment, and conversations and linguistic communication.
Saying AI but meaning something else
There is certainly a subset of those pursuing AI technologies with the goal of solving the ultimate problem: creating artificial general intelligence (AGI) that can handle any problem, situation, and thought process that a human can. But the majority of those talking about AI in the market today are not talking about AGI or solving these fundamental questions of intelligence. Rather, they are looking at applying very specific subsets of AI to narrow problem areas. This is the classic Broad/Narrow (Strong/Weak) AI discussion.
Since no one has successfully built an AGI solution, it follows that all current AI solutions are narrow. While there certainly are a few narrow AI solutions that aim to solve broader questions of intelligence, the vast majority of narrow AI solutions are not trying to achieve anything greater than the specific problem the technology is being applied to.
What interests enterprises most about AI is not that it's solving questions of general intelligence, but rather that there are specific things that humans have been doing in the organization that they would now like machines to do. The range of those tasks differs depending on the organization and the sort of problems it is trying to solve.
Rather than trying to build an artificial intelligence, enterprises are leveraging cognitive technologies to automate and enable a wide range of problem areas that require some aspect of cognition. Generally, you can group these aspects of cognition into three “P” categories, borrowed from the autonomous vehicles industry:
I. Perceive
– Understand the environment around you and input coming from sensors.
Perception-related cognitive technologies include image and object recognition and classification (including facial recognition), natural language processing and generation, unstructured text and information processing, robotic sensor and IoT signal processing, and other forms of perceptual computing. Perception is the area of AI research that got the biggest boost from the development of advanced neural network approaches, and deep learning in particular.
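As a small, hedged illustration of the "perceive" category, the sketch below classifies a single image with a pretrained convolutional network from torchvision; the image path is a placeholder, and any pretrained vision model would serve equally well.

```python
# pip install torch torchvision pillow
# Perception sketch: classify one image with a pretrained ResNet-18.
# "photo.jpg" is a placeholder path.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights)
model.eval()                               # inference mode, no training

preprocess = weights.transforms()          # the preprocessing the model expects
image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    logits = model(batch)
label = weights.meta["categories"][logits.argmax(dim=1).item()]
print(label)                               # e.g. "mountain bike"
```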
II. Predict
– Understand patterns to predict what will happen next and learn from different iterations to improve the overall performance of the system.
Prediction-focused cognitive technologies utilize a range of machine learning, reinforcement learning, big data, and statistical approaches to process large volumes of information, identify patterns or anomalies, and suggest next steps and outcomes. Neural networks are helpful here, but so are other ways of doing machine learning, as well as even simpler approaches such as knowledge graphs and statistical Bayesian models. Prediction-focused cognitive technologies span the range from big data analytics to complex, human-like decision models.
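To ground the "predict" category, here is a minimal sketch of one of the simpler approaches mentioned above, a statistical Bayesian model. The two features and the synthetic "normal versus incident" data are invented for illustration; a real deployment would train on operational telemetry.

```python
# pip install scikit-learn numpy
# Prediction sketch: a Gaussian naive Bayes model that learns from
# historical observations and predicts whether a new one precedes an
# incident. The two features and all numbers are invented toy data.
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(0)
# Synthetic history: normal periods cluster low, incident periods high.
normal = rng.normal(loc=[0.3, 0.02], scale=0.05, size=(200, 2))
incident = rng.normal(loc=[0.8, 0.10], scale=0.05, size=(40, 2))
X = np.vstack([normal, incident])
y = np.array([0] * 200 + [1] * 40)   # 1 = an incident followed

model = GaussianNB().fit(X, y)
new_obs = np.array([[0.75, 0.09]])   # a fresh observation
print(model.predict(new_obs))        # predicted class, here likely 1
print(model.predict_proba(new_obs))  # class probabilities
```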
III. Plan
– Use what was learned and perceived to make decisions and plan next steps.
Planning-focused cognitive technologies include decision-making models and methods that try to mimic how humans make decisions. Early attempts include expert systems. More recent methods apply a range of approaches in situations such as cognitive-enabled cybersecurity or loan decisions. Planning is the area that stands to gain the most from further general AI research, since machines currently lack the intuition, common sense, emotional intelligence, and other faculties that make humans much better at planning and decision-making.
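The "plan" category is harder to compress into a few lines, but the early expert systems mentioned above can be: below is a toy rule-based loan-decision sketch in plain Python, with every rule and threshold invented purely for illustration.

```python
# Planning sketch: a toy rule-based "expert system" for loan decisions,
# in the spirit of early expert systems. All rules and thresholds are
# invented for illustration only.
from dataclasses import dataclass

@dataclass
class Applicant:
    credit_score: int
    annual_income: float
    requested_amount: float

def decide(applicant: Applicant) -> str:
    # Rules fire in priority order, mimicking a hand-built decision policy.
    if applicant.credit_score < 580:
        return "deny: credit score below minimum"
    if applicant.requested_amount > 0.5 * applicant.annual_income:
        return "refer: loan large relative to income, needs human review"
    if applicant.credit_score >= 720:
        return "approve: strong credit"
    return "approve with conditions: standard credit tier"

print(decide(Applicant(credit_score=700, annual_income=60000, requested_amount=20000)))
```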
From this perspective, it's clear that cognitive technologies are indeed a subset of artificial intelligence technologies; the main difference is that AI can be applied both toward the goals of AGI and toward narrowly focused applications. On the other hand, using the term cognitive technology instead of AI is an acceptance of the fact that the technology being applied borrows from AI capabilities but doesn't have ambitions of being anything other than technology applied to a narrow, specific task.
Surviving the next AI winter
The mood in the AI industry is noticeably shifting. Marketing hype, venture capital dollars, and government interest are all helping to push demand for AI skills and technology to its limits. Companies are quickly realizing the limits of AI technology, and we risk an industry backlash as enterprises push back on what is being overpromised and underdelivered, just as we experienced in the first AI Winter. The big concern is that interest will cool too much and AI investment and research will again slow, leading to another AI Winter. However, just as the Space Race resulted in technologies with broad adoption today, so too will the AI Quest result in cognitive technologies with broad adoption, even if we never achieve the goals of AGI.
Kathleen Walch Forbes Why-cognitive-technology-may-be-a-better-term-than-artificial-intelligence
Checklist for Cognitive Technology adoption:
- What are my desired outcomes?
- How will cognitive technology help me achieve these outcomes?
- What is my long-term vision with this technology?
- Do I have strong executive support?
- Can my organization adapt existing processes and roles?
- Do I have the necessary skills within my organization?
- Do I have the IT environment I need to get started?
- Which path is right for me: build, deploy or collaborate?
- How will value be measured?
IBM Amplify Conference: Cognitive Technology: What It Is And Why Marketers Should Care
At IBM’s recent Amplify conference, there was a heavy focus around cognitive technologies, an emerging area that integrates data mining, pattern recognition and natural language processing to mimic the way the human brain works. To better understand what it is and why CMOs should care, I consulted with Harriet Green, General Manager, Watson Internet of Things, Commerce and Education, IBM.
Kimberly Whitler: Before we dive into the cognitive waters, I have to ask about a recent announcement IBM made regarding Apple Pay. How does this impact marketers?
Harriet Green: Absolutely. This new Apple Pay on the web capability makes it easier and more secure for our Commerce clients to complete a transaction. That's a big deal when you consider that four of the top five reasons users abandon the checkout process are tied to the logistics of entering information through desktop or mobile, according to BI Intelligence.
When you dig down, the impact is far greater. As with cognitive, it dramatically improves the shopping experience for customers, and that's one of the biggest, if not THE biggest, priorities for CMOs today. When you make every step of a customer's journey brilliant, they will come back for more, they will talk you up to their friends and become advocates for you in social media circles. That's how you create loyal customers, which is something every CMO cares about very deeply.
Whitler: You mentioned cognitive. At IBM’s recent Amplify conference there was a heavy focus around cognitive technologies. Why should marketers care about cognitive?
Green: For marketers, one thing has always remained constant, the customer. But today’s customers are sharing incredible amounts of information and they expect marketers to use each piece to personalize every little interaction on their journey. At first glance that sounds easy enough but consider this—every year hundreds of zettabytes of data are being generated. In fact, 90% of the data in the world today has been created in the last two years alone. That’s a huge number and much of it’s coming from consumers like you and me. Now of course we would all like this information served up in an easy-to-digest pie chart, but the reality marketers face is far different. Much of this is dark data, messy, human information and it’s passing right by marketers unnoticed and unused. To say this is a huge missed opportunity is a massive understatement. This is why cognitive computing is so incredibly important.
Cognitive technologies are unlike anything that’s come before. They use natural language processing and machine learning to understand, reason and learn and in doing so put each customer at the very center of every marketing campaign. We’ve seen this with Watson and now we are seeing it with marketers as well as merchandisers and ecommerce practitioners. By putting these same cognitive capabilities into the hands of marketers, this data is no longer hidden in the dark. Marketers can instantly gain a deeper view into the bigger world, discover new patterns, get to know each customer on unimaginable levels and be agile enough to shift campaigns on the fly. We can also combine this with data from other sources such as social media and weather which raises our understanding of customers to entirely new levels.
Whitler: Any examples of how cognitive could impact a retail business?
Green: When we were in Tampa for Amplify there were two examples that struck me most. The first involved our customer Performance Bicycle. Performance Bicycle is the number one specialty bicycle retailer in the U.S. and a longstanding IBM customer. They also share our cognitive vision.
Imagine that a prospective customer expresses a newfound interest in cycling. Taking the first step in a new area like cycling can be daunting, and many consumers would probably drive to the cycling shop, where a store representative guides them through the entire process. But this buyer is an avid online shopper, so she does her research from home. As she pages through the Performance Bicycle site, the company knows she's a new prospect by looking at her web behavior, social media posts, and in-store activity. They also see that she is early in her buying journey and support her with the right content, such as tips for getting started, good cycling routes in her area, and, of course, gear for beginners. Cognitive makes taking that initial step much easier, but it doesn't stop there.
As this customer becomes a more avid cyclist (and engages with the website more often), this is where cognitive really creates value. As I alluded to earlier, cognitive is agile: it adapts to these changes and presents her with different content, such as race schedules, maintenance tips she can follow herself, and more sophisticated gear for longer rides. And it does this all automatically. It understands her, it understands the content, and it puts them together in real time for a remarkable, end-to-end client experience.
The second example came from one of our Design Studio employees who created an entire campaign using two things, her voice and Watson, the ultimate agile marketer. That's it. As you have probably heard already, Watson uses natural language processing and machine learning to reveal insights from the hundreds of zettabytes of data I mentioned earlier. Using Watson, she described her target audience, and Watson quickly responded with the full list of targets and recommendations, everything from which campaign to use to the content and offers it thought would resonate best. It even took into account current weather factors and suggested a different set of products and offers for this audience. With these elements locked in, she simply instructed Watson to launch the campaign. The entire exercise took just a few minutes. It's a pretty transformative experience for marketers.
Whitler: Earlier you mentioned cognitive’s ability to understand, reason and learn. Are these technologies going to replace marketers?
Green: No. What cognitive does is help marketers focus less on tedious day-to-day tasks and more on the bigger picture: delighting the customer. Today, marketers spend nearly 70% of their time on mundane details and just 30% strategizing and creating experiences for customers. Mention this figure to any marketer and the response is unanimous: we need to spend more time designing and planning truly compelling, personalized, customer-driven marketing initiatives that bring the customer to the center of everything. Technologies such as cognitive help to flip that 70/30 figure, freeing marketers to focus the majority of their time and effort on personalizing the entire journey for each and every customer.
Kimberly A. Whitler ...Forbes Cognitive Technology: What It Is And Why Marketers Should Care
Cognitive technology for a personalized seizure predictive and healthcare analytic device
Cognitive technology
The term cognition denotes the mental ability to learn from experience, mistakes, and the like. Cognitive technology refers to technology that helps machines possess the mental ability to mimic humans [11]. The purpose of cognitive technology is to infuse intelligence into already prevailing nonintelligent machines; it is the evolution of devices into cognitive, that is, intelligent, devices. It mimics human behavior and learns in much the way humans evolve from childhood to adulthood, based on experiences, mistakes, and different scenarios.
Similarly, applying cognition to devices helps them think, analyze, and make decisions. Cognitive technology can be regarded as a limited form of artificial intelligence [12]. It can be better understood through the following principles; Table 2.3 describes the principles of cognitive technology and their characteristics.
Table 2.3. Principles and characteristics of cognitive technology.
S. no | Principle | Characteristics
1 | Interprets | Understands data received from various sensors; utilizes technologies such as computer vision and natural language processing to understand the data
2 | Learning | Mimics the human behavior of learning unfamiliar things; iterates multiple times to find correlations and patterns in the data
3 | Prediction | Predicts problems, learns from mistakes, and improves its working efficiency for better results; technologies such as deep learning, machine learning, and statistical reinforcement learning can detect anomalies and patterns to predict future problems
Application of Cognitive Technology in Healthcare Systems
a. Cognitive technology is used to develop personalized healthcare systems.
b. It is used to develop remote patient monitoring systems [13]. The advantages of a remote health monitoring system are:
✓ Improved patient management;
✓ Prediction and prevention of sudden deaths;
✓ Greater reliability and cost-effectiveness;
✓ A better standard of health care;
✓ A better rate of accountability.
c. The healthcare system captures the corporal parameters of patients from various sensors or devices attached to the patient. It then analyzes the data, understands the patterns in it, and predicts sudden health-hazardous problems (a simple sketch of this capture-analyze-predict loop appears after this list).
d. Patients lose their lives to inadequate manual monitoring in emergencies; this system, by contrast, saves lives and protects patients from sudden deaths and emergencies because it is highly effective.
e. The system also keeps all of a patient's medical records together and classifies the data, which can be helpful for the patient's future medication.
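As a hedged sketch of the capture-analyze-predict loop in item (c), the code below flags heart-rate readings that deviate sharply from a rolling baseline. The window size and z-score threshold are illustrative programming choices, not clinical values from the chapter.

```python
# pip install numpy
# Illustrative monitoring loop: flag heart-rate readings that deviate
# strongly from the recent rolling baseline. Window and threshold are
# illustrative, not clinical, values.
from collections import deque
import numpy as np

WINDOW = 30        # readings kept as the rolling baseline
THRESHOLD = 3.0    # z-score beyond which a reading is flagged

def monitor(readings):
    history = deque(maxlen=WINDOW)
    for t, bpm in enumerate(readings):
        if len(history) == WINDOW:
            mean, std = np.mean(history), np.std(history)
            if std > 0 and abs(bpm - mean) / std > THRESHOLD:
                yield t, bpm   # candidate alert for clinicians
        history.append(bpm)

# Simulated stream: a steady baseline with one sudden spike.
stream = [72 + (i % 3) for i in range(60)] + [140, 73, 72]
for t, bpm in monitor(stream):
    print(f"alert at reading {t}: {bpm} bpm")
```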
Advantages of Cognitive Technology
a. Greater efficiency;
b. More reliability;
c. The emergence of e-medicine or telemedicine;
d. The development of digital communication between patients and health professionals;
e. Proactive, round-the-clock protection and monitoring, which helps decrease the number of sudden deaths.
Human Interfaces
Alonso H. Vera, in Human Factors in Information Technology, 1999
INTRODUCTION
Amongst its goals, Cognitive Technology (CT) proposes to create a new set of methodologies for understanding the interrelationships that are possible between humans and machines. In contrast to traditional Human Computer Interaction (HCI) approaches, the proposed goal of CT is one of creating tools that further culture, society, and human interaction. The CT view attributes the problems of HCI to a misguided focus on making smarter machines rather than smarter humans. This problem is seen as compounded by practitioners’ lack of awareness of the subjective roles they play in their “science” (Gorayska and Marsh, 1996).
I will argue that the main problem with computer technology today has little to do with general issues of the relation between researcher, humans, and the environment, but instead with the specific lack of application of user-centred design methodologies. It also has little to do with attempts to anthropomorphize computers. Current research in HCI is directed toward developing interfaces that assist humans where they are limited (e.g., working memory) and that intelligently support activities at which humans are better (e.g., decision making). The main reason why today's technology does not seem particularly suitable for common human use is not that the methodologies of HCI have failed but that they have seldom been used. When they have been carefully applied, notable success stories have resulted (see Landauer (1995) for evidence on this issue).
Methodologies such as cognitive modelling allow us to characterise the cognitive mechanisms, processes, and constraints involved in the performance of specific tasks. If these tasks happen to be technology-based, then these methodologies shed light on how our cognitive processes are enhanced or impeded by the technology. Granted, it tells us little about the socio-cultural impact of new technologies but I would argue that those effects are largely independent of the evolution of our cognitive facilities. So, although there has been and will continue to be significant social and cultural change as a consequence of technology, it does not mean that social change translates into genuine changes in our cognitive makeup.
Human Interfaces
Alex Kass, Joe Herman, in Human Factors in Information Technology, 1999
INTRODUCTION: WHEN THE COGNITIVE “SIDE-EFFECTS” ARE REALLY THE MAIN EVENT
The emerging field of Cognitive Technology highlights the cognitive effects that interactive systems have on those who use them. An interactive system can help users perform better than they could unaided, but it can also, often unintentionally, pressure users to think about the task differently, and often in ways that are less natural and less desirable. If insufficient attention is given to the cognitive ergonomics of an interactive system it can cause cognitive strain, stunting, or maladaption in the same way that improper use of a keyboard can lead to carpal tunnel syndrome. It is, therefore, important for cognitive technologists to discover techniques and principles to allow designers to predict and control the cognitive influence that interactive systems will have on their users.
While much of the focus of human factors research is on unintended cognitive side effects, there is another side to this issue of cognitive influence which should not be overlooked. Cognitive influence is not always an unintended side effect of some other activity: what about the times when we want the use of computer programs to change the way people think? For example, modification of cognition is the primary desired effect of an educational activity. In computer-based learning-by-doing environments, the roles of the main effects and side effects are exactly the reverse of most interactive software: the user (the student) works to accomplish a task in a computer-based environment which is designed to cause cognitive change. The principles that are employed to minimise undesirable cognitive effects and those developed to maximise desirable ones should bear a strong relationship to each other. Thus, in this chapter we hope to shed some light on these issues by sharing what we have learned about how to design desirable cognitive effects into a particular class of computer-based learning environments in which the student's task is to produce a well-reasoned recommendation.
Cognitive big data analysis for E-health and telemedicine using metaheuristic algorithms
Deepak Rai, Hiren Kumar Thakkar, in Cognitive Big Data Intelligence with a Metaheuristic Approach, 2022
7.2 Applications of metaheuristics in cognitive big data–based healthcare
Some applications of metaheuristic algorithms with cognitive technologies for healthcare systems are as follows:
• In Ref. [26], Fruitfly Optimization, a metaheuristic algorithm, was used with a support vector machine (SVM) algorithm for the analysis of the Wisconsin breast cancer dataset, the Pima Indians diabetes dataset, and a Parkinson's disease dataset.
• In Ref. [27], the Genetic Algorithm (GA), a metaheuristic approach, was used with the SVM algorithm to diagnose diabetes. GA was used for feature selection, and SVM was used as a classifier. The diagnosis was also conducted using the K-means clustering algorithm without any optimization approach; the metaheuristic-based SVM classification achieved 2.08% higher accuracy than the K-means algorithm (a simplified sketch of this GA-plus-SVM pattern appears after this list).
• In Ref. [28], the Firefly Algorithm, a metaheuristic optimization approach, was used with SVM for predicting malaria transmission. The prediction was also made using SVM alone, without any optimization approach; the SVM with the metaheuristic approach performed better than SVM alone.
• In Ref. [29], the Chaos Firefly Algorithm was used to optimize the computational burden of an Interval Type-2 Fuzzy Logic System for the diagnosis of heart disease.
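To make the GA-plus-SVM pattern of Ref. [27] concrete, here is a deliberately simplified sketch: a mutation-only hill climb over a binary feature mask, scored by the cross-validated accuracy of an SVM. A real genetic algorithm would add a population, crossover, and selection; the Wisconsin breast cancer dataset bundled with scikit-learn merely stands in for the clinical data of the cited studies.

```python
# pip install scikit-learn numpy
# Simplified metaheuristic feature selection around an SVM: evolve a
# binary feature mask by single-bit mutation and keep whichever mask
# yields the best cross-validated accuracy (a hill climb, not a full GA).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)   # Wisconsin breast cancer data
rng = np.random.default_rng(0)

def fitness(mask):
    if not mask.any():                       # an empty mask cannot classify
        return 0.0
    return cross_val_score(SVC(), X[:, mask], y, cv=3).mean()

best = rng.random(X.shape[1]) < 0.5          # random initial feature mask
best_fit = fitness(best)
for _ in range(30):                          # mutation-only search loop
    child = best.copy()
    flip = rng.integers(X.shape[1])
    child[flip] = ~child[flip]               # flip one feature in or out
    child_fit = fitness(child)
    if child_fit >= best_fit:                # keep non-worse masks
        best, best_fit = child, child_fit

print(f"selected {best.sum()} of {X.shape[1]} features, CV accuracy {best_fit:.3f}")
```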
In summary, it can be concluded that these metaheuristic-based optimization approaches will become an essential part of data extraction, data preprocessing, and large-scale data analytics in cognitive big data–based healthcare systems.
Human Interfaces
Benny Karpatschof, in Human Factors in Information Technology, 1999
Point 2. The real scientific relevance of Cognitive Technology
What then is the real perspective of Cognitive Technology? There is an aspect of the externalisation tendency in cultural development that has not been mentioned until now. Parallel to the tendency to externalise human activity into tools and techniques, there is a reverse tendency of re-internalisation. That means that whenever we have produced some artifact or externalised knowledge, we have the opportunity of a confrontation with this external picture of ourselves.
Thus the rise of a mechanical technology from the late Middle Ages was the material precondition of the development of a scientific physiology. When Harvey had the bright idea of understanding the heart as a pump, he did so on the basis of the construction of a pump that already was an externalisation of human activity, namely the activity of moving a liquid.
In the process of re-internalisation, we make what appears to be a category mistake: reducing human abilities and processes to human artefacts. This reductionism is, however, sometimes fertile, or even correct, whenever the artefact is already an externalisation of human abilities and processes.
Thus the thesis of strict AI, that there is no fundamental difference between the human mind and the computer, is a gross exaggeration of the plausible thesis that we have constructed the computer as an externalisation of mental tasks, and that we therefore have an opportunity of studying aspects of these tasks in their externalised form.
In this light, Cognitive Technology not only has the necessary task of finding inspiration in disciplines such as cognitive psychology and linguistics; it also has great potential to inspire those disciplines.
Human Interfaces
Barbara Gorayska, Jonathon P. Marsh, in Human Factors in Information Technology, 1999
Background
To date, the effort spent to define Cognitive Technology (CT) as a distinct field of inquiry has emphasised the need to pragmatically understand the dialectic relationship between the use of augmenting artefacts and the process of cognitive adaptation resulting from exposure to fabricated environments. The central position has been defined as the need to study human cognitive inputs to the integration between people and tools and, in so doing, produce greater a priori insight into the socio-cognitive impact of technological innovation. We need to do this in order to directly benefit people rather than simply facilitate and speed up technological progress. If we do not, the value to the user of the ensuing form of "user centred" tool design may remain essentially a matter of rhetoric.
It has been further proposed that, in order to achieve this objective, cognitive technology studies must first be grounded in a coherent theory of adaptation, with defining principles of human-artefact integration through interaction which can be brought to bear on the process of designing technology. What has emerged from these considerations is a number of critical issues that need to be addressed by anyone interested in investigating the co-evolution of tools and the minds that create them. Of particular interest are the issues embodied within two key CT questions:
How can we define, predict, and recognise the threshold at which technological enhancement of human ability/performance becomes a constraint on that very ability/performance?
and/or
How do we design humane user-tool interfaces?
However, it is not sufficient simply to ask these questions. We need also to consider how and to what end they have been formulated. Are they properly situated within an appropriate conceptual framework? To what degree does the language they are framed in condition the kind of answers we look for?
Introduction to cognitive computing and its various applications
Sushila Aghav-Palwe, Anita Gunjal, in Cognitive Computing for Human-Robot Interaction, 2021
Case study: a personal travel planner by WayBlazer to simplify travel planning
WayBlazer has created a personal travel planner that uses the power of cognitive technology to make it easy for travelers to plan trips (Makadia, 2019). Travelers can ask questions in natural language. By gathering and analyzing trip data as well as knowledge about a traveler's likings, the tourism agent asks simple questions and offers personalized results. The cognitive tool saves time on hotel booking and flight search when finalizing the travel journey. Travel agents have used this technology effectively, improving their sales and customer loyalty hand in hand.
Human Interfaces
Colin T. Schmidt, Patrick Ruch, in Human Factors in Information Technology, 1999
PERSPECTIVES OF ENQUIRY & TECHNOLOGY ASSESSMENT
This departure from mainstream TA, HCI (and maybe even CT) could be a result of the instruments used, those developed with extreme care for long-term research programmes. In fact, acting in this manner preserves the more ideal tenets of the institutional cause in the end. Would this not be desirable? Along our journey we will cover some TA components and focus on dialogical systemic models, specifically the one developed by the French philosopher Francis Jacques for human interaction (Jacques, 1985), which Schmidt subsequently "applied" to an HCI conceptualisation in order to compare human communication and human-machine interaction as a means of advancing the latter by properly differentiating it from the former. It strongly seems to be the case that people designing machine instructions who choose to filter technical 'how-to-do-it' questions through the sieve of a dialogical stance become further concerned with pragmatic issues, indeed both those of HCI and the Cognitive Sciences in general. This broader investigative scope better enables them to express just how their goals come about (Schmidt, 1997b), not unlike the breadth of enquiry dear to Gorayska & Marsh (1996). An example of this would be a designer having an acquired understanding of the origins of the pressure brought to bear in a hierarchical way upon his daily professional activities (such as from the social, psychological and economic nature of a particular client for whom his superiors are contracting). The intention here is to aid the evolution of these models in order to be able to integrate the further enhanced social perspective required for TA problems. Obviously we are working on surpassing the technical issues of TA, which are also 'how-to-do-it' questions though they involve a different subject matter than those of concern to the HCI field (i.e. selection criteria concerning projects and entrepreneurial activities), with an aim of looking into fundamental questions: the 'why?' questions. While the 'how?' questions of HCI seem to pervade the community (and it is easy to see why this would also be the case in TA), the 'why?' questions really are quite personal in nature and verge on entering (or even blatantly enter) the realm of ethics and morality with respect to one's choices regarding technology. Just as the pragmatic understanding of the HCI designer could over-extend its usual hierarchical limits, the technological assessor could perform evaluation based on informed views on world improvement rather than abide by the priorities of Congress: the question becomes a version of "should I actually act taking this information into account, or ignore it and just follow orders?"
Increasing the pragmatic dimension of one's understanding of a situation evidently allows one to take a step back from one's work and, in a way, critique the field in which one works using positive suspicion. In the HCI field, the critical eye of the observer has brought about a greater acknowledgement of the role of this Self in design activities (Schmidt, 1997b) by integrating the designer himself into a model of the interaction between man and machine, thus stressing the fact that the relationship between the two can be at best pseudo-referential (Schmidt, forthcoming). The technological conclusions that may immediately be drawn for that field are, in short, that because of the designer's necessary personal interference, the autonomy of the machine is dependent on Man. This dependency destroys communication in the full human sense of the word. (Fortunately so, for if communicative autonomy in machines were possible, the relative horrors of science fiction would become reality!) Furthermore, users will be led astray if encouraged to personify machines as long as the function of true reference is not available to their relationship. Limiting the creation of the appearance of human intelligence should be a major principle of HCI theory, to be respected by practitioners.
So be it for HCI. This brings us to the question of what the pragmatic approach of dialogism can do for Technology Assessment. Using analogical reasoning, the integration of assessors into the TA process itself could prove to be just as beneficial as such action was in the HCI field, but one has to beware of the temptation to generalise afore-gained knowledge without thorough verification. The destination field usually will have different constraints. When undergoing transposition from HCI to conceptual Technology Assessment, the "What do we wish to achieve?" agenda of investigation, complete with the whys and wherefores of those goals, tends to be reset; this time its expression carries a philosophically interrogative tone similar to that of "Who are we and who do we want to be?" questions. This is especially true when one considers the fact that technology pinpoints the intentional limits of policy (i.e. the unpredictable consequences of technological choice, our ignorance of natural events, etc.), which has implications for the future of our social well-being. Therefore, HCI models rooted in dialogism that reveal an undesirable prevalence of domain intra-theoretical design principles can henceforth be transposed onto TA, for the benefit of TA. Through the a priori nature of the questions involved, in relation to the genesis of a technical object, the aim is to achieve a constructive TA that will play a dynamic role in the decision process.
Let us take a look at some TA details to get a feel for the domain. The Office for Technology Assessment (OTA), the very first parliamentary TA entity, came into being in the US Congress during the early seventies. Its goal was originally to give the people's representatives (legislative power) proper means to assess the administration's (executive power) technological decisions, through the creation of an independent mixed committee composed of scientists and representatives. Since then, numerous TA offices of various forms have come to life in Western Europe, and lately in East European countries (Ruch, 1995). They are not all faring well, however. For instance, though the American OTA still exists legally, its funding has been severely cut back since 1995.
We have defined three levels of TA activity (Ruch, 1995): 1) the economical/ecological level redefines a new value, no longer the classical economic added value, but a "life" value; 2) the epistemological level confronts the independence of experts and theories with the social background; and 3) the political level asks "who does what in the decision-making process?" We note that, although the economical/ecological and epistemological levels sit nicely at home with both TA and CT/TC, the political level is TA-specific. This level of enquiry constitutes a shift from authentic intersubjective dialogism to "subject-only supported dialogism" (the subject being Congress), which lends itself to supporting discourses like "which words are to be used in order to make our decisions acceptable?" In our opinion this is adialogical. The constructive turn in European TA has made a fundamental improvement in what concerns this level, allowing TA to perform its true work of critiquing technology. The general idea behind TA should be more cogitable now.
Any decision-making process entails a cognitive system. The notion of a "cognitive system", quite vast indeed, tends to be understood in a variety of ways. The term is often used to refer to a self-sufficient psychological unit, but sometimes it refers to a functionally partial one, some open unit capable of supporting social cogitative processes or 'swarm intelligence' and so forth for group decisions. These all being homonymous, "cognitive system" has different referential functionalities in the machine fabrication fields, stretching across traditional HCI (one machine, one user) to the newer CSCW field (one machine, many users), and something beyond this in future (?). Let us integrate the notion of the cognitive system into our study with a view to recovering the implications of working in technological fields; such analytical practices instigated the beginnings of Cognitive Technology, a field of study that has been largely overlooked because of the implicitness involved in epistemological shifts in a field where the tangibility of the final product seems to fill up too much of the picture. Users' needs may represent a question; technological endeavours at the interface may represent the response. And the other way around, simultaneously. Genuine communication is thus established in a conceptual model. Analogical thinking in coordination with erotetics (the logic of questions and answers) will produce the verification required of the resulting relationships in order to establish whether or not communication may hold in other situations. If a concept can be both a question vis-à-vis another concept and the response to the question it triggers in that same concept (and vice versa), a genuine contract of communication is "signed" between the two.
Human Interfaces
Myron W. Krueger, in Human Factors in Information Technology, 1999
One ingredient that few have missed in the human interface is smell, but a discussion of sensory representation would not be complete without considering it. Indeed, smell has surprising possibilities as a component of cognitive technology. Throughout the ages, many, including Marcel Proust, have commented on the fact that odours trigger memories. In fact, odour can not only trigger memory (Proust, 1913-27), it can also improve memory, as has been shown in many studies (Engen, 1991). Olfactory memory extinguishes much more slowly than other kinds of memory. In addition, olfactory stimuli can be used to improve human performance (Baron and Bronfen, 1994; Rottman, 1989). A number of studies have shown that odours can alter moods (Hashimoto et al., 1988, 1994). They can also improve performance at vigilance tasks (Warm et al., 1991) and spatial reasoning tasks (Knasko et al., in preparation).
HetNet/M2M/D2D communication in 5G technologies
Ayaskanta Mishra, ... Raed M. Shubair, in 5G IoT and Edge Computing for Smart Healthcare, 2022
3.7.1.1 5G system architecture
5G possesses an advanced architecture with upgraded network elements to cope with new technologies. Service providers use these advanced features to provide value-based services. The main feature of 5G is the inclusion of cognitive technology, which is able to identify geographical parameters such as temperature, weather, location, etc. Using this technology, 5G terminals act as transceivers by responding to the radio signals of the local environment and continue to provide quality of service (QoS). The system model of 5G technology is based entirely on IP for all wireless technologies. The 5G system consists of two components: (1) the user terminal (cell phone) and (2) radio access technologies, which are independent and autonomous.
These radio technologies establish paths as IP links to the public domain or internet world. With IP technology, data routing is managed and controlled with respect to the specific application or session established between the client present here and the server somewhere else on the internet. For smooth routing, packet routing needs to be fixed according to the application policy.
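As a loose illustration of the terminal-side behavior described above, the sketch below chooses a radio access technology from sensed link measurements according to a per-application QoS policy. The technologies, metrics, and thresholds are all invented for illustration and are not drawn from the chapter.

```python
# Illustrative sketch of policy-driven access selection in a "cognitive"
# terminal: given sensed link measurements, choose the radio access
# technology (RAT) that satisfies the application's QoS policy.
# All names, metrics, and numbers are invented for illustration.

SENSED = {                      # hypothetical measurements per available RAT
    "5g-mmwave": {"bandwidth_mbps": 900, "latency_ms": 8,  "loss": 0.02},
    "5g-sub6":   {"bandwidth_mbps": 250, "latency_ms": 15, "loss": 0.005},
    "lte":       {"bandwidth_mbps": 60,  "latency_ms": 35, "loss": 0.01},
}

POLICIES = {                    # per-application QoS requirements (invented)
    "telemedicine_video": {"min_bw": 100, "max_latency": 20, "max_loss": 0.01},
    "sensor_telemetry":   {"min_bw": 1,   "max_latency": 100, "max_loss": 0.05},
}

def select_rat(app):
    policy = POLICIES[app]
    candidates = [
        (name, m) for name, m in SENSED.items()
        if m["bandwidth_mbps"] >= policy["min_bw"]
        and m["latency_ms"] <= policy["max_latency"]
        and m["loss"] <= policy["max_loss"]
    ]
    # Among links that meet the policy, prefer the lowest latency.
    return min(candidates, key=lambda c: c[1]["latency_ms"])[0] if candidates else None

print(select_rat("telemedicine_video"))   # -> "5g-sub6" under these numbers
```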
Vishalteja Kosana, ... Abu ul Hassan S. Rana, in Cognitive and Soft Computing Techniques for the Analysis of Healthcare Data, 2022
Sciencedirect Cognitive-technology