Pinned Repositories
Act
AI without UX is of limited utility. UX is what distributes that intelligence across the organization and pushes it to the edge – where it can be consumed by practitioners and subject matter experts. Ultimately, operationalizing an intelligent application within the enterprise requires some organizational change: an acceptance that the application will evolve over time and that it will demand downstream changes – automated or otherwise. For this to happen, intelligent applications need to be “live” in the business process, seeing new data and automatically executing the loop of discover, predict, justify at a frequency that makes sense for that business process. For some processes that may be quarterly; for others, daily. The loop can even be measured in seconds.
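A minimal sketch of keeping that loop “live” might look like the following. The stage functions, the batch source and the cadence value are placeholders for illustration, not APIs from these repositories:

```python
import time

def run_live_loop(get_new_data, discover, predict, justify,
                  interval_seconds=86400):  # daily cadence; shrink for near-real-time
    """Run the discover -> predict -> justify loop on each new batch of data."""
    while True:
        batch = get_new_data()                   # pull whatever arrived since the last run
        if batch is not None:
            structure = discover(batch)          # unsupervised structure in the new data
            scored = predict(batch, structure)   # supervised scoring
            justify(scored)                      # attach explanations before anyone acts
        time.sleep(interval_seconds)
```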
Discover
Discovery is the ability of an intelligent system to learn from data without upfront human intervention. Often, this must be done without an explicit target. It relies on unsupervised and semi-supervised machine learning techniques (such as segmentation, dimensionality reduction and anomaly detection), as well as more supervised techniques when there is one outcome, or several outcomes, of interest. In enterprise software, the term discovery usually refers to the ability of ETL/MDM solutions to discover the schemas of tables in large databases and automatically find join keys, etc. That is not what we mean by discovery. We use the term very differently, and the difference has important implications. In complex datasets, it is nearly impossible to ask the “right” questions. To discover what value lies within the data, one must understand all the relationships that are inherent and important in the data, and that requires a principled approach to hypothesis generation. One technique, topological data analysis (TDA), is exceptional at surfacing hidden relationships in the data and identifying which of them are meaningful, without having to ask specific questions of the data. The result is an output that can represent complex phenomena and therefore surface weaker signals as well as stronger ones, which permits the detection of emergent phenomena. As a result, enterprises can discover answers to questions they didn’t even know to ask, and do so with data that is unlabeled.
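TDA itself is beyond a short snippet, but a minimal sketch of the unsupervised side of discovery – dimensionality reduction, segmentation and anomaly detection on unlabeled data – might look like the following, using standard scikit-learn stand-ins; the synthetic data and parameters are assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                  # unlabeled data, no explicit target

X_low = PCA(n_components=2).fit_transform(X)                     # dimensionality reduction
segments = KMeans(n_clusters=5, n_init=10).fit_predict(X_low)    # segmentation
outliers = IsolationForest(random_state=0).fit_predict(X)        # anomaly detection (-1 = outlier)

print(np.bincount(segments))                     # how the data splits into segments
print((outliers == -1).sum(), "candidate anomalies")
```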
Justify
Applications need to support interaction with humans in a way that makes outcomes recognizable and believable. For example, when one builds a predictive model, it is important to have an explanation of how the model does what it does – what the features in the model are doing, expressed in terms familiar to the model’s users. This level of familiarity is important in generating trust and intuition. Just as automobiles have mechanisms not only for detecting the presence of a malfunction but also for specifying its nature and suggesting a method for correcting it, one needs a nuts-and-bolts understanding of how an application works in order to “repair” it when it goes awry. There is a difference between transparency and justification: transparency tells you what algorithms and parameters were used, while justification tells you why. For intelligence to be meaningful, it must be able to justify and explain its assertions, as well as diagnose its failures. No leader should deploy intelligent and autonomous applications against critical business problems without a thorough understanding of what variables power the model. Enterprises cannot move to a model of intelligent applications without trust and transparency.
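One common way to report which variables power a fitted model, in terms users can inspect, is permutation importance. A minimal sketch on a synthetic dataset follows; the feature names and parameters are assumptions for illustration, not the method used in these repositories:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Report features from most to least influential, in user-recognizable terms.
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```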
Learn
Intelligent systems are designed to detect and react as data distributions evolve. An intelligent system is one that is always learning, lives in the workflow and is constantly improving. In the modern data world, an application that is not getting more intelligent is getting dumber. Because distributions shift, intelligent applications need to be “on the wire” to detect those shifts before they become a problem. Too many solutions provide an answer at a point in time; an intelligent system keeps learning through the framework outlined here. This is what defines intelligence – not a machine learning algorithm kicking out PDFs containing predictions, nor the one-off results of a data scientist’s work. For the industry to continue to grow and evolve, we need to do a better job of recognizing what is truly AI and what is not.
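A minimal sketch of detecting a distribution shift “on the wire” could compare each new batch against a reference sample with a two-sample Kolmogorov–Smirnov test per column; the threshold and synthetic data below are assumptions:

```python
import numpy as np
from scipy.stats import ks_2samp

def drifted_columns(reference: np.ndarray, batch: np.ndarray, alpha: float = 0.01):
    """Return indices of columns whose distribution differs significantly from the reference."""
    flagged = []
    for col in range(reference.shape[1]):
        result = ks_2samp(reference[:, col], batch[:, col])
        if result.pvalue < alpha:
            flagged.append(col)
    return flagged

rng = np.random.default_rng(0)
reference = rng.normal(size=(2000, 4))
batch = rng.normal(size=(500, 4))
batch[:, 2] += 1.5                       # simulate a shift in one column

print(drifted_columns(reference, batch))  # expect [2] to be flagged
```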
MUSE-EEG
MUSE-EEG tools
Predict
Once the data set is understood through intelligent discovery, supervised approaches are applied to predict what will happen in the future. These problems include classification, regression and ranking. For this pillar, most companies use a standard set of supervised machine learning algorithms, including random forests, gradient boosting and linear/sparse learners. It should be noted, however, that the unsupervised work from the previous step is useful here in many ways: for example, it can generate relevant features for prediction tasks, or find local patches of data where supervised algorithms make systematic errors. The predict phase accounts for much of the business value associated with data science; however, there is a common notion in predictive analytics that it is the sum total of machine learning. It is not, by far. Prediction, while important, is well understood and does not, on its own, qualify as “intelligence.” Further, prediction can go wrong along a number of dimensions, particularly if the groups on which you are predicting carry some type of bias. In and of itself, prediction is not AI, and we need to stop calling it that.
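A minimal sketch of this pillar on a synthetic dataset: a standard gradient-boosting classifier, with a segment label from the unsupervised step appended as an extra feature. The data and parameters are illustrative assumptions:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Feature carried over from the discovery step: which segment each row belongs to.
segment = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)
X_aug = np.column_stack([X, segment])

X_train, X_test, y_train, y_test = train_test_split(X_aug, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```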