The information an organization gets from its computer systems, Davenport argues, can be far less useful than what comes in from other sources in the overall ecology of information, as processed by people. And a search engine may give you massive data, but no context for understanding, let alone wisdom about that information. What makes data more useful is the person curating it.
Ideally, the person who curates information will zero in on what matters, prune away the rest, establish a context for what the data means, and do all that in a way that shows why it is vital, and so captures people's attention.
The best curators don't just put the data in a meaningful context; they know what questions to ask. When I interviewed Davenport, he was writing a book that encourages those who manage big data projects to ask questions like these: Are we defining the right problem? Do we have the right data? What are the assumptions behind the algorithm the data gets fed into? Does the model guiding those assumptions map onto reality?
At an MIT conference on big data, one speaker pointed out that the financial crisis of 2008 and onward was a failure of the method, as hedge funds around the world collapsed. The dilemma is that the mathematical models embodied in big data are simplifications. Despite the crisp numbers they yield, the math behind those numbers hinges on models and assumptions, which can fool those who use them into placing too much confidence in their results.
At that same conference, Rachel Schutt, a senior statistician at Google Research, observed that data science requires more than math skills: it also takes people who have a wide-ranging curiosity, and whose innovation is guided by their own experience, not just data. After all, the best intuition takes huge amounts of data, harvesting our entire life experience, and filters it through the human brain.
