Scott Davidoff is a Principal Research Scientist at the NASA Jet Propulsion Laboratory. His work uses the methods of Human-Centered Design to investigate how scientists and engineers work with high-dimensional data to understand both our universe and the robots we use to explore it. He also works to transform how organizations solve technical problems, and has invented problem-solving techniques practiced at the world's leading companies and taught at its leading universities.
Broadly influencing Human-Computer Interaction research, Dr. Davidoff serves on program committees for the US National Science Foundation (NSF), the US Office of Naval Research (ONR), and the Association for Computing Machinery’s (ACM) Computer-Human Interaction (CHI) and Designing Interactive Systems (DIS) Conferences. His work has won multiple Best Paper and NASA awards. He also co-founded Virtualitics, the world's first immersive explainable artificial intelligence product.
Dr. Davidoff has a PhD, MS and MHCI in Human-Computer Interaction from Carnegie Mellon University, and an AB in English and Political Theory from Duke University.
[“Adam Coscia”, “Haley M. Sapers”, “Noah Deutsch”, “Malika Khurana”, “John S. Magyar”, “Sergio A. Parra”, “Daniel R. Utter”, “Rebecca L. Wipfler”, “David W. Caress”, “Eric J. Martin”, “Jennifer B. Paduan”, “Maggie Hendrie”, “Santiago Lombeyda”, “Hillary Mushkin”, “Alex Endert”, “Scott Davidoff”, “Victoria J. Orphan”]
Scientists studying deep ocean microbial ecosystems use limited numbers of sediment samples collected from the seafloor to characterize important life-sustaining biogeochemical cycles in the environment. Yet conducting fieldwork to sample these extreme remote environments is both expensive and time consuming, requiring tools that enable scientists to explore the sampling history of field sites and predict where taking new samples is likely to maximize scientific return. We conducted a collaborative, user-centered design study with a team of scientific researchers to develop DeepSee, an interactive data workspace that visualizes 2D and 3D interpolations of biogeochemical and microbial processes in context together with sediment sampling history overlaid on 2D seafloor maps. Based on a field deployment and qualitative interviews, we found that DeepSee increased the scientific return from limited sample sizes, catalyzed new research workflows, reduced long-term costs of sharing data, and supported teamwork and communication between team members with diverse research goals.
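A minimal sketch of the kind of 2D interpolation described above, assuming a SciPy-based workflow; the sample coordinates and the "sulfide" variable are hypothetical stand-ins, not DeepSee's actual data model:

```python
# Hypothetical sketch: interpolate sparse seafloor sample measurements onto a
# regular 2D grid, in the spirit of DeepSee's 2D interpolated map views.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
xy = rng.uniform(0, 100, size=(25, 2))   # sample locations (m), mock data
sulfide = np.exp(-((xy[:, 0] - 40) ** 2 + (xy[:, 1] - 60) ** 2) / 800)  # mock values

# Regular grid covering the mapped site
gx, gy = np.meshgrid(np.linspace(0, 100, 200), np.linspace(0, 100, 200))

# Linear interpolation inside the convex hull of the samples,
# nearest-neighbor fill elsewhere so the map overlay has no holes
field = griddata(xy, sulfide, (gx, gy), method="linear")
fill = griddata(xy, sulfide, (gx, gy), method="nearest")
field = np.where(np.isnan(field), fill, field)
```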
2024-05-11
[“Benjamin Donitz”, “Jasmine Otto”, “Malika Khurana”, “Scott Davidoff”, “Madison Young”, “Matt Heverly”, “Harvey Elliott”]
The Mars Sample Return (MSR) Campaign is a coordinated effort between NASA and ESA to return samples from the surface of Mars. For the first time ever, there would be multiple surface assets operating in a local region, sharing relay passes from a single orbiter. Furthermore, MSR’s constrained surface timeline drives the need for each surface asset to maximize its productivity and not let any sols go to waste. To address these risks, MSR has developed tools and performed careful analysis to architect the relay system to support all MSR surface assets by allocating passes in a deliberate manner that maximizes overall productivity. In this paper, we focus on the Mars Asset Relay Mission Link Allocation Design Environment (MARMLADE), a tool designed by mission systems engineers at JPL to allocate passes to various MSR surface assets to maximize their operational efficiency. The allocations are meant to maximize campaign productivity, and enable the campaign to march towards its primary surface mission goal: to deliver samples to orbit in a timely manner for rendezvous with the Earth Return Orbiter (ERO) and return to Earth. The MARMLADE tool and its results are used to assist mission planners at the campaign- and project-level by providing quantitative sensitivities to hardware trades, such as different telecom architectures, operational trades such as operating times and durations on Mars, and human factors trades such as staffing windows and shifts on Earth. MARMLADE is enabling the development of a sound mission system and operations plan early in the mission phase.
To increase the ability of stakeholders to easily utilize the MARMLADE data and incorporate it into their workflow, the MARMLADE developers collaborated with a data visualization design program to develop MarsIPAN, a visual analytics software tool that allows more straightforward and user-friendly use of the MARMLADE allocations. MarsIPAN displays the allocations for stakeholders to interpret the outcomes of different assumptions, and provides a platform to determine the sensitivity of the allocations to changes in those assumptions. Integrating MARMLADE and MarsIPAN dramatically increases the overall system utility by making it broadly accessible to users, and eliminates the need for training on how to use the previous Excel-based allocations.
A previous IEEE Aerospace paper described the purpose and development of the MARMLADE tool. This paper will provide an update on MSR Campaign Surface Relay, provide an update on the MARMLADE tool and its capabilities, and introduce the Mars Interactive Pass Allocation Navigator (MarsIPAN) as an accompanying visualization tool. This paper will also discuss how MARMLADE continues to inform trades, and how future schedule optimizers can learn from MARMLADE for adaptation to other missions that require highly coordinated data pipelines.
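For intuition only, here is a deliberately simplified greedy pass allocator; it is not the MARMLADE algorithm, and the pass and asset structures are hypothetical, but it shows the shape of the allocation problem described above:

```python
# Hypothetical sketch, not the actual MARMLADE algorithm: greedily assign each
# relay pass to the surface asset with the most data waiting to be returned.
from dataclasses import dataclass, field

@dataclass
class Pass:
    pass_id: str
    sol: int
    data_volume_mb: float

@dataclass
class Asset:
    name: str
    backlog_mb: float = 0.0                      # data awaiting return
    allocated: list = field(default_factory=list)

def allocate(passes, assets):
    """Stand-in for 'maximize overall productivity': largest backlog wins."""
    for p in sorted(passes, key=lambda p: (p.sol, p.pass_id)):
        asset = max(assets, key=lambda a: a.backlog_mb)
        asset.allocated.append(p.pass_id)
        asset.backlog_mb = max(0.0, asset.backlog_mb - p.data_volume_mb)
    return {a.name: a.allocated for a in assets}

passes = [Pass("P1", 1, 300), Pass("P2", 1, 250), Pass("P3", 2, 300)]
assets = [Asset("SRL", backlog_mb=500), Asset("Helicopter", backlog_mb=400)]
print(allocate(passes, assets))   # {'SRL': ['P1', 'P3'], 'Helicopter': ['P2']}
```

A real allocator would also model pass timing, link rates, and staffing constraints, which is where MARMLADE's quantitative sensitivity analyses come in.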
2024-03-02
[“Albert Eng”, “Rimma Hämäläinen”, “Timothy Tran”, “Gerry Cruz”, “Ho Nhut”, “Scott Davidoff”, “Li Liu”]
NASA’s Mars 2020 Mission is to study Mars’ habitability and seek signs of past microbial life. The mission uses an X-ray fluorescence spectrometer to identify chemical elements at sub-millimeter scales of the Mars surface. The instrument captures high spatial resolution observations comprised of several thousand individual measured points by raster-scanning an area of the rock surface. This paper will show how different methods, including linear regression, k-means clustering, image segmentation, similarity functions, and Euclidean distances, perform when analyzing datasets provided by the X-ray fluorescence spectrometer to assist scientists in understanding the distribution and abundance variations of chemical elements making up the scanned surface. We also created an interactive map to correlate the X-ray spectrum data with a visual image acquired by an RGB camera.
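As one illustration of the methods named above, a hedged sketch of the k-means step on mock data; the raster dimensions and element list are hypothetical:

```python
# Hypothetical sketch: cluster per-point XRF element abundances with k-means
# to reveal compositional regions in a raster scan.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_points, n_elements = 4096, 8          # ~64x64 scan; e.g. Si, Fe, Mg, Ca, ...
abundances = rng.gamma(2.0, 1.0, size=(n_points, n_elements))   # mock data

# Normalize each point so clusters reflect composition, not total counts
comp = abundances / abundances.sum(axis=1, keepdims=True)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(comp)
cluster_map = labels.reshape(64, 64)    # ready to overlay on the visual image
```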
2023-07-18
[“Austin P Wright”, “Peter Nemere”, “Adrian Galvin”, “Duen Horng (Polo) Chau”, “Scott Davidoff”]
While anomaly detection stands among the most important and valuable problems across many scientific domains, anomaly detection research often focuses on AI methods that can lack the nuance and interpretability so critical to conducting scientific inquiry. We believe this exclusive focus on algorithms with a fixed framing ultimately blocks scientists from adopting even high-accuracy anomaly detection models in many scientific use cases. In this application paper we present the results of utilizing an alternative approach that situates the mathematical framing of machine learning based anomaly detection within a participatory design framework. In a collaboration with NASA scientists working with the PIXL instrument studying Martian planetary geochemistry as a part of the search for extra-terrestrial life, we report on over 18 months of in-context user research and co-design to define the key problems NASA scientists face when looking to detect and interpret spectral anomalies. We address these problems and develop a novel spectral anomaly detection toolkit for PIXL scientists that is highly accurate (93.4% test accuracy on detecting diffraction anomalies), while maintaining strong transparency to scientific interpretation. We also describe outcomes from a yearlong field deployment of the algorithm and associated interface, now used daily as a core component of the PIXL science team’s workflow, and directly situate the algorithm as a key contributor to discoveries around the potential habitability of Mars. Finally, we introduce a new design framework which we developed through the course of this collaboration for co-creating anomaly detection algorithms: Iterative Semantic Heuristic Modeling of Anomalous Phenomena (ISHMAP), which provides a process for scientists and researchers to produce natively interpretable anomaly detection models. This work showcases an example of successfully bridging methodologies from AI and HCI within a scientific domain, and provides a resource in ISHMAP which may be used by other researchers and practitioners looking to partner with other scientific teams to achieve better science through more effective and interpretable anomaly detection tools.
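To make the idea of a natively interpretable detector concrete, here is a hedged sketch of one plausible heuristic; the deployed PIXL toolkit's actual rules and thresholds are not reproduced here. It flags scan points where the instrument's paired detectors disagree sharply in a narrow energy band, since fluorescence should appear in both while a geometry-dependent diffraction peak need not:

```python
# Hypothetical sketch of an interpretable diffraction-anomaly heuristic:
# flag spectra where detectors A and B disagree beyond Poisson noise.
import numpy as np

def diffraction_candidates(spec_a, spec_b, window=5, z_thresh=6.0):
    """spec_a, spec_b: (n_points, n_channels) counts from the two detectors."""
    diff = spec_a.astype(float) - spec_b.astype(float)
    sigma = np.sqrt(spec_a + spec_b + 1.0)        # Poisson-ish scale of A - B
    z = diff / sigma
    kernel = np.ones(window) / window             # smooth over a few channels
    z_smooth = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, z)
    hit_rows = np.where((np.abs(z_smooth) > z_thresh).any(axis=1))[0]
    return [(int(i), int(np.argmax(np.abs(z_smooth[i])))) for i in hit_rows]

rng = np.random.default_rng(2)
a = rng.poisson(50, size=(100, 512))
b = rng.poisson(50, size=(100, 512))
a[17, 240:244] += 400                             # inject a mock diffraction peak
print(diffraction_candidates(a, b))               # ~[(17, 242)]
```

Because every flag traces back to a named physical rule, a scientist can audit why a point was marked anomalous, which is the property the paper argues matters.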
2023-03-27
[“Rebecca Castano”, “Federico Rossi”, “Tiago Stegun Vaquero”, “Vandi Verma”, “Dan Allard”, “Rashied Amini”, “Anthony Barrett”, “Mathieu Choukroun”, “Scott Davidoff”, “Nihal Dhamani”, “Raymond Francis”, “Mark Hofstadter”, “Bennett Huffmann”, “Michel Ingham”, “Ashkan Jasour”, “Marijke Jorritsma”, “Ellen Van Wyk”, “Gregg Rabideau”]
Future deep-space robotic explorers will use advanced onboard autonomy to address high-priority science questions, e.g., observing fast-changing phenomena and adapting to dynamic environmental circumstances. Onboard autonomy technologies such as planning and scheduling, identification of scientific targets, and content-based data summarization will lead to exciting new deep space science missions. However, traditional operations practices, skills, and processes were not designed for spacecraft with such onboard autonomous capabilities. This paper summarizes the results of a two-year investigation conducted at JPL to explore how ground operations processes, practices, and tools will need to adapt to support effective use of onboard autonomy. In particular, we identify areas where current workflows and tools will need to be enhanced to accommodate commanding and analysis of onboard planning and scheduling software for deep space exploration. Our focus is on onboard planning and scheduling: we identify the required changes necessary to enable operators and scientists to convey their desired intent to future autonomous spacecraft’s planning and execution systems via goals and priorities rather than sequences of commands, and to be able to reconstruct and explain the decisions made onboard and the state of the spacecraft, providing a practical path to users trusting the autonomy, which is one of the most significant barriers to full adoption. Collectively, these results form key steps toward adoption of onboard spacecraft autonomy, which will enable new, bolder exploration of the outer solar system, small bodies, and the surface of ocean worlds.
@inproceedings{castano-et-al-spaceops2023, author = {Castano, Rebecca and Rossi, Federico and Stegun Vaquero, Tiago and Verma, Vandi and Allard, Dan and Amini, Rashied and Barrett, Anthony and Choukroun, Mathieu and Davidoff, Scott and Dhamani, Nihal and Francis, Raymond and Hofstadter, Mark and Huffmann, Bennett and Ingham, Michel and Jasour, Ashkan and Jorritsma, Marijke and Van Wyk, Ellen and Rabideau, Gregg}, title = {Operating Deep Space Autonomous Spacecraft: Ground Processes and Tools for Operability and Trust}, booktitle = {International Conference on Space Operations (SpaceOps)}, year = {2023}, month = {March}, address = {Dubai, UAE}, clearance = {CL #23-0810 URS314296 }, url = {https://ai.jpl.nasa.gov/public/documents/papers/castano-et-al-spaceops2023.pdf}, project = {OperationsForAutonomy} }
2023-03-06
[“Federico Rossi”, “Dan A. Allard”, “Rashied Amini”, “Tiago Stegun Vaquero”, “Nihal N. Dhamani”, “Mathieu Choukroun”, “Vandi Verma”, “Marijke Jorritsma”, “Scott Davidoff”, “Raymond Francis”, “Ellen Van Wyk”, “Ashkan Jasour”, “Mark Hofstadter”, “Bennett W. Huffman”, “Anthony C. Barrett”, “Michel D. Ingham”, “Rebecca Castano”]
Autonomous planning and scheduling is a key enabling technology for future robotic Solar System explorers: as missions venture farther in the Solar System, light-speed delays and low available bandwidth make on-board autonomy increasingly attractive to maximize science returns and enable otherwise-infeasible observations of transient phenomena, e.g. storms on gas giants and plumes on icy worlds. However, ground operations of future autonomous explorers will require a paradigm shift, moving from the current practice of specifying timed sequences of commands to specifying high-level goals that on-board autonomy should elaborate based on the spacecraft’s state and on the sensed environment. In this paper, we explore the problem of adapting ground operations processes, roles, and tools to accommodate on-board planning and scheduling. We design and prototype a framework of user interfaces and algorithmic tools to support uplink and downlink processes of future autonomous spacecraft. The framework’s goals are to allow scientists and engineers to both convey their desired intent to the spacecraft in a format compatible with the on-board planner, and reconstruct and explain the decisions made on-board and their impact on the state of the spacecraft. We assess the performance of the framework through a design simulation where JPL scientists and operators simulate realistic operations of an Ice Giant multi-flyby mission concept, aided by the proposed framework. The design simulation confirms that the proposed approach holds promise to enable operators to interact with on-board autonomy, and suggests a number of recommendations for the next generation of operations tools supporting autonomous spacecraft.
2023-03-04
[“Maggie Hendrie”, “Hillary Mushkin”, “Santiago V. Lombeyda”, “Scott Davidoff”]
Data visualization frequently provides audiences with novel semantic and computational presentations. How does a multifaceted team expand this scope by harnessing the power of visualization as a tool to think with? The NASA JPL/Caltech/ArtCenter data visualization program demonstrates how scientific knowledge, shaped from data and theory, is equally co-constructed from diverse human perspectives. We will share case studies from Mars rover path planning and PIXLISE, a visual reasoning tool for understanding planetary geology. Working from source data through mixed media artifacts, these projects explore co-design methods for complex scientific domains with real-world applications. Our methodology emphasizes that all participants in the co-design process are both learners and experts. In this dynamic, the design and coding process are unique modes of critical discovery.
2022-12-01
[“Michael M. Tice”, “Joel A. Hurowitz”, “Abigail C. Allwood”, “Michael W. M. Jones”, “Brendan J. Orenstein”, “Scott Davidoff”, “Austin P. Wright”, “David A.K. Pedersen”, “Jesper Henneke”, “Nicholas J. Tosca”, “Kelsey R. Moore”, “Benton C. Clark”, “Scott M. McLennan”, “David T. Flannery”, “Andrew Steele”, “Adrian J. Brown”, “Maria-Paz Zorzano”, “Keyron Hickman-Lewis”, “Yang Liu”, “Scott J. VanBommel”, “Mariek E. Schmidt”, “Tanya V. Kizovski”, “Allan H. Treiman”, “Lauren O’Neil”, “Alberto G. Fairén”, “David L. Shuster”, “Sanjeev Gupta”, “The PIXL Team”]
Collocated crystal sizes and mineral identities are critical for interpreting textural relationships in rocks and testing geological hypotheses, but it has been previously impossible to unambiguously constrain these properties using in situ instruments on Mars rovers. Here, we demonstrate that diffracted and fluoresced x-rays detected by the PIXL instrument (an x-ray fluorescence microscope on the Perseverance rover) provide information about the presence or absence of coherent crystalline domains in various minerals. X-ray analysis and multispectral imaging of rocks from the Séítah formation on the floor of Jezero crater shows that they were emplaced as coarsely crystalline igneous phases. Olivine grains were then partially dissolved and filled by finely crystalline or amorphous secondary silicate, carbonate, sulfate, and chloride/oxychlorine minerals. These results support the hypothesis that Séítah formation rocks represent olivine cumulates altered by fluids far from chemical equilibrium at low water-rock ratios.
2022-11-23
[“Yang Liu”, “Michael M. Tice”, “Mariek E. Schmidt”, “Allan H. Treiman”, “Tanya V. Kizovski”, “Joel A. Hurowitz”, “Abigail C. Allwood”, “Jesper Henneke”, “David A.K. Pedersen”, “Scott J. VanBommel”, “Michael W. M. Jones”, “Abigail L. Knight”, “Brendan J. Orenstein”, “Benton C. Clark”, “W. Timothy Elam”, “Christopher M. Heirwegh”, “Tom Barber”, “Luther W. Beegle”, “Karim Benzerara”, “Sylvain Bernard”, “Olivier Beyssac”, “Tanja Bosak”, “Adrian J. Brown”, “Emily L. Cardarelli”, “David C. Catling”, “John Christian”, “Edward Cloutis”, “Barbara Cohen”, “Scott Davidoff”, “Alberto G. Fairén”, “Kenneth A. Farley”, “David T. Flannery”, “Adrian Galvin”, “John P. Grotzinger”, “Sanjeev Gupta”, “James Hall”, “Christopher D. K. Herd”, “Keyron Hickman-Lewis”, “Robert P. Hodyss”, “Briony H. N. Horgan”, “Jeffrey R. Johnson”, “John Leif Jørgensen”, “Linda C. Kah”, “Justin N. Maki”, “Lucia Mandon”, “Nicolas Mangold”, “Francis M. McCubbin”, “Scott M. McLennan”, “Kelsey R. Moore”, “Marion Nachon”, “Peter Nemere”, “Luke D. Nothdurft”, “Jorge I. Núñez”, “Lauren O’Neil”, “Cathy M. Quantin-Nataf”, “Violaine Sautter”, “David L. Shuster”, “Kirsten L. Siebach”, “Justin I. Simon”, “Kimberly P. Sinclair”, “Kathryn M. Stack”, “Andrew Steele”, “Jesse D. Tarnas”, “Nicholas J. Tosca”, “Kyle Uckert”, “Arya Udry”, “Lawrence A. Wade”, “Benjamin P. Weiss”, “Roger C. Wiens”, “Kenneth H. Williford”, “Maria-Paz Zorzano”]
The geological units on the floor of Jezero crater, Mars, are part of a wider regional stratigraphy of olivine-rich rocks, which extends well beyond the crater. We investigated the petrology of olivine and carbonate-bearing rocks of the Séítah formation in the floor of Jezero. Using multispectral images and x-ray fluorescence data, acquired by the Perseverance rover, we performed a petrographic analysis of the Bastide and Brac outcrops within this unit. We found that these outcrops are composed of igneous rock, moderately altered by aqueous fluid. The igneous rocks are mainly made of coarse-grained olivine, similar to some martian meteorites. We interpret them as an olivine cumulate, formed by settling and enrichment of olivine through multistage cooling of a thick magma body.
2022-08-25
[“Rebecca Castano”, “Tiago Vaquero”, “Federico Rossi”, “Vandi Verma”, “Ellen Van Wyk”, “Dan Allard”, “Bennett Huffman”, “Erin M. Murphy”, “Nihal Dhamani”, “Robert A. Hewitt”, “Scott Davidoff”, “Rashied Amini”, “Anthony Barrett”, “Julie Castillo-Rogez”, “Steve A. Chien”, “Mathieu Choukroun”, “Alain Dadaian”, “Raymond Francis”, “Benjamin Gorr”, “Mark Hofstadter”, “Mitch Ingham”, “Cristina Sorice”]
Onboard autonomy technologies such as planning and scheduling, identification of scientific targets, and content-based data summarization will lead to exciting new space science missions. However, the challenge of operating missions with such onboard autonomous capabilities has not been studied to a level of detail sufficient for consideration in mission concepts. These autonomy capabilities will require changes to current operations processes, practices, and tools. We have developed a case study to assess the changes needed to enable operators and scientists to operate an autonomous spacecraft by adopting a common model between the ground personnel and the onboard algorithms. We assess the new operations tools and workflows necessary to enable operators and scientists to convey their desired intent to the spacecraft, and to be able to reconstruct and explain the decisions made onboard and the state of the spacecraft. Mock-ups of these tools were used in a user study to understand the effectiveness of the processes and tools in enabling a shared framework of understanding, and the ability of the operators and scientists to effectively achieve mission science objectives.
@inproceedings{castano-et-al-ieeeaero2022, author={Castano, Rebecca and Vaquero, Tiago and Rossi, Federico and Verma, Vandi and Van Wyk, Ellen and Allard, Dan and Huffmann, Bennett and Murphy, Erin M. and Dhamani, Nihal and Hewitt, Robert A. and Davidoff, Scott and Amini, Rashied and Barrett, Anthony and Castillo-Rogez, Julie and Choukroun, Mathieu and Dadaian, Alain and Francis, Raymond and Gorr, Benjamin and Hofstadter, Mark and Ingham, Michel and Sorice, Cristina and Tierney, Iain}, booktitle={2022 IEEE Aerospace Conference (AERO)}, title={Operations for Autonomous Spacecraft}, year={2022}, volume={}, number={}, pages={1-20}, keywords={Space vehicles;Space missions;Prototypes;User interfaces;Predictive models;Planning;Personnel}, doi={10.1109/AERO53065.2022.9843352} }
2022-03-05
[“Maggie Hendrie”, “Hillary Mushkin”, “Santiago V. Lombeyda”, “Scott Davidoff”]
The Data to Discovery program investigates how experimental interdisciplinary collaborations in computing, art, co-design and research enable and critically engage scientific discovery. This NASA JPL/Caltech/ArtCenter College of Design collaboration frames interactive data visualization in science and engineering as a unique transdisciplinary practice and mode of knowledge production. Our presentation addresses two key design objectives: how best to design interactive data visualization systems in complex scientific domains with real-world, mission-critical applications, and how to develop a methodology, based on interdisciplinary best practices, that addresses diverse processes and application domains to prioritize learning for interns and discovery for scientists and engineers.
2021-05-25
David O. Flannery,Scott Davidoff,Michael M. Tice,Abigail C. Allwood,William Timothy Elam,Christopher M. Heirwegh,Joel A. Hurowitz,Yang Liu,Peter Nemere
Mars rovers of the future will be fitted with increasingly complex context and sample analysis instrumentation. In the case of Mars 2020, the contact science payload is optimised for petrography. Two complementary lithochemistry instruments on the rover’s arm are designed to generate up to tens of thousands of spectral measurements per experiment. These data are generated in addition to microscale context imaging, housekeeping data, and contextual information produced by additional sensors onboard the rover. One consequence of these technological advances is a higher fidelity experience for geologists who may be able to place themselves within virtual field environments. However, mission scientists are grappling with the same challenges as those at the cutting edge of terrestrial investigations: the ability to generate vast datasets, from the nano- to macroscale, is advancing faster than the ability to organise, analyse and share these data. A single sample may yield gigapixel contextual images, microscale topographic models, and huge hyperspectral datasets generated by diverse analytical techniques that are difficult to cross-reference and synthesize.
For Mars rover missions, this problem is further compounded by the often-unusual characteristics of flight instruments, which are designed to maximise measurement flexibility and survival in harsh environments, in contrast to the ideal geometries and sample formats prevailing in laboratory set-ups. Furthermore, tight turnarounds are required for the analysis of downlinked datasets, as the time windows during which a platform can be commanded to investigate any given target are limited, and experimental results may affect key operational decisions. This is particularly true for the Mars 2020 mission, which aims to significantly improve the efficiency of rover operations in order to complete an ambitious sample selection, documentation and caching plan during the prime mission in Jezero Crater.
To address this problem, we explored novel approaches to increasing the efficiency of data analysis tasks performed by the globally-distributed Planetary Instrument for X-ray Lithochemistry (PIXL) Science Team. PIXL is an X-ray fluorescence instrument flying aboard the robotic arm of the Mars 2020 rover. In contrast to previously flown XRF instruments (e.g. XRFS, APXS) that have performed elemental chemistry measurements over regions several centimetres in diameter, PIXL will be the first such instrument to generate micron-scale elemental chemistry maps. As well as the peaks expected from the fluorescence of major and minor elements in rocks and soils, and scattered primary radiation, x-ray spectra may also include peaks related to the presence of exotic/unpredicted elements, x-ray diffraction (due to the passage of x-rays through the atomic lattices of minerals), and other spectral artifacts. PIXL scientists desire visualization tools that allow the rapid identification of features such as these that may be hidden within the very large datasets generated by the instrument.
An ongoing collaboration between astrobiologists, computer scientists, and computer-human-interaction researchers at the Queensland University of Technology (Australia) and the NASA Jet Propulsion Laboratory has culminated in a software platform (PIXLISE) that will be used to analyse PIXL data returned during Mars 2020 surface operations. PIXLISE users manipulate large, multi-spectral datasets via web browsers and a cloud-based, shareable user session architecture. Innovative data visualization approaches minimise human-in-the-loop processing tasks. For example, a dimensionality reduction technique known as t-SNE is used to autonomously group spectral populations into rock components, such as grains, veins and cements.
PIXLISE interfaces with PIQUANT, a quantitative XRF measurement model based on the fundamental parameters approach developed at the University of Washington. PIXL scientists rapidly iterate quantitative calculations by passing spectral populations and model parameters between the PIXLISE and PIQUANT instances running in the cloud. By improving the efficiency of data analysis tasks, and distributing PIQUANT computation across multiple virtual machines, processing times for complex PIXL experiments have been reduced from days to minutes.
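A hedged sketch of the t-SNE grouping step mentioned above, assuming a scikit-learn workflow; PIXLISE's actual implementation differs in detail, and the spectra here are mock:

```python
# Hypothetical sketch: embed per-point spectra with t-SNE, then group the
# embedding into candidate rock components (grains, veins, cements).
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
spectra = rng.poisson(30, size=(500, 256)).astype(float)   # mock scan points

embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(spectra)
components = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)
# 'components' labels each point with a candidate rock component for review
```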
2021-01-31
Abigail C. Allwood,Lawrence A. Wade,Marc C. Foote,William Timothy Elam,Joel A. Hurowitz,Steven Battel,Douglas E. Dawson,Robert W. Denise,Eric M. Ek,Martin S. Gilbert,Matthew E. King,Carl Christian Liebe,Todd Parker,David A. K. Pedersen,David P. Randall,Robert F. Sharrow,Michael E. Sondheim,George Allen,Kenneth Arnett,Mitchell H. Au,Christophe Basset,Mathias Benn,John C. Bousman,Robert J. Calvet,Luca Cinquini,Benton Clark,Sterling Conaby,Henry A. Conley,Scott Davidoff,Jenna Delaney,Troelz Denver,Ernesto Diaz,Gary B. Doran,Joan Ervin,Michael Evans,David O. Flannery,Ning Gao,Johannes Gross,John Grotzinger,Brett Hannah,Jackson T. Harris,Cathleen M. Harris,Christopher M. Heirwegh,Christina Hernandez,Eric Hertzberg,Robert P. Hodyss,James R. Holden,Christopher Hummel,Matthew A. Jadusingh,John L. Jørgensen,Jonathan H. Kawamura,Amarit Kitiyakara,Kris Kozaczek,James L. Lambert,Peter R. Lawson,Yang Liu,Kristen M. Macneal,Scott McLennan,Patrick McNally,Patrick L. Meras,Jamie Napoli,Brett J. Naylor,Peter Nemere,Napat Pootrakul,Raul A. Romero,Rogelio Rosas,Jared Sachs,Michael E. Schein,Timothy P. Setterfield,Vritika Singh,Eugenie Song,Mary M. Soria,Nicholas R. Tallarida,David R. Thompson,Michael M. Tice,Lars Timmermann,Violet Torossian,Allan Treiman,Shihchuan Tsai,Kyle Uckert,Juan Villalvazo,Mandy Wang,Daniel W. Wilson,Shana C. Worel,Payam Zamani,Mike Zappe,Richard Zimmerman
Planetary Instrument for X-ray Lithochemistry (PIXL) is a micro-focus X-ray fluorescence spectrometer mounted on the robotic arm of NASA’s Perseverance rover. PIXL will acquire high spatial resolution observations of rock and soil chemistry, rapidly analyzing the elemental chemistry of a target surface. In 10 seconds, PIXL can use its powerful 120 μm-diameter X-ray beam to analyze a single, sand-sized grain with enough sensitivity to detect major and minor rock-forming elements, as well as many trace elements. Over a period of several hours, PIXL can autonomously raster-scan an area of the rock surface and acquire a hyperspectral map comprised of several thousand individual measured points. When correlated to a visual image acquired by PIXL’s camera, these maps reveal the distribution and abundance variations of chemical elements making up the rock, tied accurately to the physical texture and structure of the rock, at a scale comparable to a 10X magnifying geological hand lens. The many thousands of spectra in these postage stamp-sized elemental maps may be analyzed individually or summed together to create a bulk rock analysis, or subsets of spectra may be summed, quantified, analyzed, and compared using PIXLISE data analysis software. This hand lens-scale view of the petrology and geochemistry of materials at the Perseverance landing site will provide a valuable link between the larger, centimeter- to meter-scale observations by Mastcam-Z, RIMFAX and Supercam, and the much smaller (micron-scale) measurements that would be made on returned samples in terrestrial laboratories.
2020-11-19
[“Sandra Bae”, “Federico Rossi”, “Joshua Vander Hook”, “Scott Davidoff”, “Kwan-Liu Ma”]
Autonomous multi-robot systems, where a team of robots shares information to perform tasks that are beyond an individual robot’s abilities, hold great promise for a number of applications, such as planetary exploration missions. Each robot in a multi-robot system that uses the shared-world coordination paradigm autonomously schedules which robot should perform a given task, and when, using its worldview: the robot’s internal representation of its belief about both its own state and other robots’ states. A key problem for operators is that robots’ worldviews can fall out of sync (often due to weak communication links), leading to desynchronization of the robots’ scheduling decisions and inconsistent emergent behavior (e.g., tasks not performed, or performed by multiple robots). Operators face the time-consuming and difficult task of making sense of the robots’ scheduling decisions, detecting desynchronizations, and pinpointing the cause by comparing every robot’s worldview. To address these challenges, we introduce MOSAIC Viewer, a visual analytics system that helps operators (i) make sense of the robots’ schedules and (ii) detect and conduct a root cause analysis of the robots’ desynchronized worldviews. Over a year-long partnership with roboticists at the NASA Jet Propulsion Laboratory, we conduct a formative study to identify the necessary system design requirements and a qualitative evaluation with 12 roboticists. We find that MOSAIC Viewer is faster and easier to use than the users’ current approaches, and it allows them to stitch low-level details to formulate a high-level understanding of the robots’ schedules and detect and pinpoint the cause of the desynchronized worldviews.
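The root-cause comparison at the heart of this workflow can be pictured as a pairwise diff over the robots' worldviews; a minimal sketch with a hypothetical worldview schema:

```python
# Hypothetical sketch: find belief entries on which any two robots disagree.
from itertools import combinations

def worldview_conflicts(worldviews):
    """worldviews: {robot: {key: value}} -> [(robot_a, robot_b, key, val_a, val_b)]"""
    conflicts = []
    for a, b in combinations(sorted(worldviews), 2):
        for k in sorted(set(worldviews[a]) | set(worldviews[b])):
            va, vb = worldviews[a].get(k), worldviews[b].get(k)
            if va != vb:
                conflicts.append((a, b, k, va, vb))
    return conflicts

wv = {
    "rover1": {"task_42_owner": "rover1", "rover2_battery": 0.8},
    "rover2": {"task_42_owner": "rover2", "rover2_battery": 0.8},
}
print(worldview_conflicts(wv))   # task_42 claimed by both -> duplicated work
```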
2020-10-25
David Schurman,Pooja Nair,Scott Davidoff,Adrian Galvin,Abigail Allwood,Yang Liu,David Flannery,Robert P Hodyss,Santiago Lombeyda,Maggie Hendrie,Hillary Mushkin,Christopher Heirwegh
Among the Mars 2020 Rover’s primary science objectives is the use of micro-X-ray fluorescence (microXRF) spectroscopy to search for biological mineral indicators. When interpreting microXRF data, astrobiologists must extrapolate mineral composition from elemental abundance. Current methods of analysis often rely on time-intensive practices such as manual spectrum peak labeling, command-line visualization tools, and producing standalone heatmaps of a sample area to determine spatial mineral distributions. Further, as data volume increases with sensor resolution, inefficient visualization techniques can easily obscure minute yet geochemically significant features within a sample. In conjunction with the Planetary Instrument for X-ray Lithochemistry (PIXL) science team, we have developed a data management, analysis, and visualization tool that improves the speed, fluidity, and effectiveness of analyzing microXRF data for astrobiology. Our tool, called PIXL Element Analysis Technology (PIXELATE), leverages novel advances in front-end rendering, data manipulation, and interactivity, as well as statistical and machine learning methods, to explore the application of modern data science methods to the PIXL research process.
2019-06-24
Basak Alper Ramaswamy,Jagriti Agrawal,Wayne Chi,So Young Kim-Castet,Scott Davidoff,Steve Chien
Automation is gaining momentum in spacecraft operations, though at a much slower pace than in comparable application domains. The reasons behind the slow adoption are (1) the need for high reliability and (2) the limited interaction between human operators and the automated systems. For automated systems to be adopted and trusted by humans, humans need to gain intuition about the decision making process of the automated system and trust in its execution. In this paper, we present how simulation and visualization can enhance adoption of an automated on-board activity scheduling system, specifically in the context of the Mars 2020 rover mission. The visualization aims to communicate to users the degree of variance and uncertainty in possible schedule executions. Our preliminary validation results suggest that the proposed visualization increases operators’ confidence in, and likelihood of adopting, the automated scheduling system.
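One way to picture the "degree of variance" the visualization communicates: Monte Carlo rollouts of uncertain activity durations yield a start-time distribution per activity. This sketch is illustrative only, with hypothetical activities and a hypothetical noise model, not the Mars 2020 scheduler:

```python
# Hypothetical sketch: estimate variance in possible schedule executions.
import numpy as np

activities = [("drive", 60.0), ("drill", 45.0), ("downlink", 20.0)]  # nominal min

def rollout(rng, jitter=0.2):
    t, starts = 0.0, {}
    for name, nominal in activities:
        starts[name] = t
        t += nominal * rng.lognormal(mean=0.0, sigma=jitter)  # uncertain duration
    return starts

rng = np.random.default_rng(4)
samples = [rollout(rng) for _ in range(1000)]
for name, _ in activities:
    s = np.array([run[name] for run in samples])
    print(f"{name}: start {s.mean():5.1f} +/- {s.std():4.1f} min")  # band to plot
```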
2019-01-06
Jarod Tristan Boone,Mika Tosca,Adrian Galvin,Abigail Nastan,David Schurman,Pooja Nair,Scott Davidoff,Santiago V. Lombeyda,Hillary Mushkin,Maggie Hendrie
Data accessibility, or the ability to access, manipulate, and visualize data, is a critical component in exploratory scientific research. Studies relating to interactions between fire and climate particularly rely on data accessibility, as they often require data from many different sources. If accessing data is difficult, investigation becomes tedious and unattractive. To illustrate this issue and offer a case-study solution, we have redesigned the access and visualization interface for the NASA “MISR” Fire Plume dataset. This dataset, originating from NASA’s Terra satellite, provides a unique measure of fire plume height and location around the world. By performing an extensive interview process and incorporating principles of human-centered design, we were able to turn the previously-cumbersome MISR interface into an intuitive and accessible system. Our results indicate that such an approach encourages research and investigation, and could be used to increase the use of many overlooked datasets.
2018-12-14
S. George Djorgovski,Ciro Donalek,Santiago Lombeyda,Scott Davidoff,Michael Amori
The key challenges of actionable knowledge discovery come from the data complexity, which often manifests as a high dimensionality of data spaces, where each measured quantity adds a new axis. The key bottleneck in the data analysis and understanding is an effective visualization; it is a bridge between the quantitative content of the data, and the human intuition and pattern recognition. Visualization is essential at every step: from data cleaning and preparation, the choice of appropriate analysis algorithms, the interpretation of their output, and the final presentation of the results. The premium is in visualizing effectively as many dimensions of the data space as possible: if there are structures in the data that involve multiple variables, e.g., clusters, correlations, gaps, outliers, etc., projecting them on a standard 2D plot hides or destroys such information. Virtual Reality (VR) offers a powerful new platform for an effective visualization of high-dimensionality data spaces, where the users (who can be geographically separated) can interact with the data, machine intelligence and analysis tools, and collaborate with each other in a shared virtual environment. Immersion amplifies the ability to grasp the relationships and patterns that may be present in the data, more effectively than in any traditional visualization platform, and even discover patterns that are impossible to see in any other way. We have developed an innovative data visualization and analytics platform that combines a multi-dimensional data visualization with a variety of Machine Learning tools, and that enables a collaborative data visualization in VR, using commodity VR headsets. We will illustrate its application on a variety of practical examples, and provide some comparisons of its effectiveness as compared to the traditional data visualization and analytics tools.
2018-12-14
Matthew Conlen,Sara Stalla,Chelly Jin,Maggie Hendrie,Hillary Mushkin,Santiago V. Lombeyda,Scott Davidoff
Operations engineering teams interact with complex data systems to make technical decisions that ensure the operational efficacy of their missions. To support these decision-making tasks, which may require elastic prioritization of goals dependent on changing conditions, custom analytics tools are often developed. We were asked to develop such a tool by a team at the NASA Jet Propulsion Laboratory, where rover telecom operators make decisions based on models predicting how much data rovers can transfer from the surface of Mars. Through research, design, implementation, and informal evaluation of our new tool, we developed principles to inform the design of visual analytics systems in operations contexts. We offer these principles as a step towards understanding the complex task of designing these systems. The principles we present are applicable to designers and developers tasked with building analytics systems in domains that face complex operations challenges such as scheduling, routing, and logistics.
2018-04-21
Saul Teukolsky,Michele Vallisneri,Rachel Akeson,Anne Archibald,Stanislav Babak,Katelyn Breivik,C. Titus Brown,Alvin Chua,Neil Cornish,Curt Cutler,Scott Davidoff,Francois Foucart,Chad Galley,Lawrence Kidder,Prayush Kumar,Astrid Lamberts,Geoffrey Lovelace,Ashish Mahabal,Christine Corbett Moran,Laura Nuttall,Maria Okounkova,Travis Robson,Mark Scheel,Deirdre Shoemaker,Stephen Taylor,Vijay Varma,Alberto Vecchio
The space-based gravitational-wave (GW) observatory LISA will offer unparalleled science returns, including a view of massive black hole mergers up to high redshifts, precision tests of general relativity and black hole structure, a census of thousands of compact binaries in the Galaxy, and the possibility of detecting stochastic signals from the early Universe. While the Mock LISA Data Challenges (2006–2011) gave us confidence that LISA will be able to fulfill its scientific potential, we still have a rather incomplete idea of what the end-to-end LISA science analysis should look like. The task at hand is substantial.
It is tempting to assume that current algorithms and prototype codes will scale up to this challenge, thanks to the greatly increased computational power that will become available by the time of LISA’s launch in the early 2030s. In reality, harnessing that power will require very different methods, adapted to future high-performance computational architectures that we can only glimpse now. Thus, we need to begin our exploration at this time, seeking inspiration from other disciplines (e.g., big data processing, computational biology, the most advanced applications in astroinformatics) and learning to pose the same physical questions in different, future-proof ways—or even daring to imagine questions that will be tractable only with future machines. The broad objective of this study was to imagine how evolved or rethought data analysis algorithms and source-modeling codes will perform the LISA science analysis on the computers of the future, with the hope of guiding LISA science and data analysis research and development in the years to come.
2018-01-16
S. George Djorgovski,Ciro Donalek,Scott Davidoff,Vicente Estrada
Data visualization systems and methods for generating 3D visualizations of a multidimensional data space are described. In one embodiment a 3D data visualization application directs a processing system to: load a set of multidimensional data points into a visualization table; create representations of a set of 3D objects corresponding to the set of data points; receive mappings of data dimensions to visualization attributes; determine the visualization attributes of the set of 3D objects based upon the selected mappings of data dimensions to 3D object attributes; update a visibility dimension in the visualization table for each of the plurality of 3D objects to reflect the visibility of each 3D object based upon the selected mappings of data dimensions to visualization attributes; and interactively render 3D data visualizations of the 3D objects within the virtual space from viewpoints determined based upon received user input.
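Reduced to data structures, the claimed pipeline looks roughly like the following sketch; all names and the normalization are illustrative, not the patented implementation:

```python
# Hypothetical sketch of the claim: visualization table -> attribute mappings
# -> per-object visual attributes and a visibility dimension.
import numpy as np

points = np.array([[1.0, 10.0, 0.2], [2.0, 12.0, 0.9], [3.0, 9.0, 0.5]])
dims = ["mass", "luminosity", "redshift"]
table = {d: points[:, i] for i, d in enumerate(dims)}       # visualization table

mapping = {"x": "mass", "y": "luminosity", "size": "redshift"}  # user-selected

def norm(v):                       # scale one data dimension to [0, 1]
    span = np.ptp(v)
    return (v - v.min()) / (span if span else 1.0)

objects = [
    {attr: float(norm(table[dim])[i]) for attr, dim in mapping.items()}
    for i in range(len(points))
]
visible = norm(table["redshift"]) < 0.9    # visibility dimension from a filter
for obj, vis in zip(objects, visible):
    obj["visible"] = bool(vis)             # the renderer draws only visible ones
```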
2017-05-30
Alexandra Holloway,Bryan Duran,Scott Davidoff,E. Jay Wyatt,Sourjya Roy Sinha
The Deep Space Network (DSN) is a collection of three sites around the globe. The positioning of the sites, 120° longitude apart, allows at least one site to see every patch of sky at all times, thus facilitating continuous coverage for any deep space spacecraft that partners with the DSN. One of the key people charged with the Deep Space Network support activities is the Link Control Operator (LCO). Among other duties, the LCO prepares the necessary hardware to track the spacecraft, tracks the spacecraft, and stows and tears down the equipment links after the track has passed. Currently, a single LCO monitors and commands one or two simultaneous supports. However, the number of supports will increase to three or more within the next five years under the DSN Follow the Sun initiative. This work investigated how link control operators monitor supports. With the current system, the subsystem details reside on each subsystem display. Operators scan across six physical monitors to find necessary information to monitor their supports. To address the identified need for an aggregate low-level summary display, a display called the Postage Stamp was developed. LCOs identified a number of critical pieces of information that lead to an understanding of support health, and all of these elements were incorporated into a single display.
2017-01-01
Matthew Doyle,Roger S. Taylor,Shashi Kanbur,Damian Schofield,Ciro Donalek,S. George Djorgovski,Scott Davidoff
This study investigates the efficacy of a 3D visualization application used to classify various types of stars using data derived from large synoptic sky surveys. Evaluation methodology included a cognitive walkthrough that prompted participants to identify a specific star type (Supernovae, RR Lyrae or Eclipsing Binary) and retrieve variable information (MAD, magratio, amplitude, frequency) from the star. This study also implemented a heuristic evaluation that applied usability standards such as the Shneiderman Visual Information Seeking Mantra to the initial iteration of the application. Findings from the evaluation indicated that improvements could be made to the application by developing effective spatial organization and implementing data reduction techniques such as linking, brushing, and small multiples.
2016-01-04
S. George Djorgovski,Ciro Donalek,Scott Davidoff,Santiago Lombeyda
An effective visualization of complex and high-dimensionality data sets is now a critical bottleneck on the path from data to discovery in all fields. Visual pattern recognition is the bridge between human intuition and understanding, and the quantitative content of the data and the relationships present there (correlations, outliers, clustering, etc.). We are developing a novel platform for visualization of complex, multi-dimensional data, using immersive virtual reality (VR), that leverages the recent rapid developments in the availability of commodity hardware and development software. VR immersion has been shown to significantly increase the effective visual perception and intuition, compared to the traditional flat-screen tools. This makes it easier to perceive higher-dimensional spaces, with an advantage for visual exploration of complex data compared to traditional visualization methods. Immersive VR also offers a natural way for a collaborative visual exploration of data, with multiple users interacting with each other and with their data in the same perceptive data space.
2015-12-18
Abdelwahab Bourai,Sarah Churng,Conrad Egan,Hillary Mushkin,Maggie Hendrie,Scott Davidoff
Functional connections within the brain can be revealed through functional magnetic resonance imaging (fMRI), which shows simultaneous activations of blood flow in the brain during response tests. However, fMRI specialists currently do not have a tool for visualizing the complex data that comes from fMRI scans. They work with correlation matrices that tabulate which functional region connections exist, but they have no corresponding visualization. FMReye is a graph network visualization tool that relies on a technique developed in computer science to support the process of interactive exploration. Using the “brushing” technique, the user can examine the same data from multiple perspectives, and multiple levels of abstraction at the same time. The Web application loads a correlation matrix of fMRI data and demonstrates three levels of abstraction within a multi-view display. The first level is the exploratory view, which is a representative 3D rotatable model of the connections between functional regions. Anatomical landmarks provide the contextual clues for spatial orientation.
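The data step underneath such a tool can be sketched simply: threshold the correlation matrix into graph edges that the linked views then render. Region names and the threshold below are hypothetical:

```python
# Hypothetical sketch: turn an fMRI correlation matrix into graph edges.
import numpy as np

regions = ["V1", "M1", "PFC", "HIP"]
corr = np.array([
    [1.0, 0.1, 0.6, 0.2],
    [0.1, 1.0, 0.7, 0.0],
    [0.6, 0.7, 1.0, 0.3],
    [0.2, 0.0, 0.3, 1.0],
])

threshold = 0.5
edges = [
    (regions[i], regions[j], corr[i, j])
    for i in range(len(regions)) for j in range(i + 1, len(regions))
    if corr[i, j] >= threshold
]
print(edges)   # [('V1', 'PFC', 0.6), ('M1', 'PFC', 0.7)]
```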
2015-03-01
[“Scott Davidoff”, “Jesse Kriss”]
Highlights reel from the EYEO 2015 data visualization conference
2015-01-01
S. George Djorgovski,Daniel J. Crichton,Richard Doyle,Ciro Donalek,Ashish Mahabal,Matthew J. Graham,Amy J. Braverman,Scott Davidoff,Chris A. Mattmann,David R. Thompson,Thomas Fuchs
Big data computational skills are essential for data-intensive research in the 21st century. We need a workforce and researchers trained in such skills. However, most universities do not yet have adequate curricula in this arena. There is a huge pent-up demand for this type of instruction. We describe some of our experiences in designing and teaching a graduate-level curriculum on the methodologies of computational science at Caltech, and offer some opinions on the subject in the broader context of the transformation of academia, including: the on-line approach is effective, scalable, and can replace the traditional advanced summer schools that require travel, time, and money; there are still no effective, affordable platforms for video interactions that involve tens of students; a combination of a text chat (e.g., a Google hangout) and the instructor on video, responding to the questions, is adequate; and many students show less commitment and dedication than they would in a traditional physical setting.
2014-12-17
Ciro Donalek,S. George Djorgovski,Alex Cioc,Anwell Wang,Jerry Zhang,Elizabeth Lawler,Stacy Yeh,Ashish Mahabal,Matthew J. Graham,Andrew J. Drake,Scott Davidoff,Jeffrey S. Norris,Giuseppe Longo
Effective data visualization is a key part of the discovery process in the era of “big data”. It is the bridge between the quantitative content of the data and human intuition, and thus an essential component of the scientific path from data into knowledge and understanding. Visualization is also essential in the data mining process, directing the choice of the applicable algorithms, and in helping to identify and remove bad data from the analysis. However, a high complexity or a high dimensionality of modern data sets represents a critical obstacle. How do we visualize interesting structures and patterns that may exist in hyper-dimensional data spaces? A better understanding of how we can perceive and interact with multidimensional information poses some deep questions in the field of cognition technology and human-computer interaction. To this effect, we are exploring the use of immersive virtual reality platforms for scientific data visualization, using both software and inexpensive commodity hardware. These potentially powerful and innovative tools for multi-dimensional data visualization can also provide an easy and natural path to a collaborative data visualization and exploration, where scientists can interact with their data and their colleagues in the same visual space. Immersion provides benefits beyond the traditional “desktop” visualization tools: it leads to a demonstrably better perception of a datascape geometry, more intuitive data understanding, and a better retention of the perceived relationships in the data.
2014-10-27
[“Hai Nguyen”, “Luca Cinquini”, “Scott Davidoff”, “Bryan Duran”, “Annmarie Eldering”, “Robert Granat”, “Michael R. Gunson”, “James P. Hoffman”, “Brian Knosp”, “Erin M. Murphy”, “Gregory B. Osterman”, “Paul Zimdars”]
CO2 is an important greenhouse gas and therefore characterizing and understanding its global distribution is crucial for the study of Earth’s changing climate. Currently, satellite remote sensing measurements of CO2 are available from the Greenhouse gases Observing SATellite (GOSAT), Atmospheric InfraRed Sounder (AIRS), Orbiting Carbon Observatory 2 (OCO-2), and Tropospheric Emission Spectrometer (TES). Traditionally, data from these different missions are distributed separately from one another and they each possess different data formats, making it cumbersome for researchers to access, analyze, and inter-compare the data.
We present an effort at JPL to design a web-based science data environment (co2.jpl.nasa.gov) that allows users to access and utilize CO2 data from GOSAT, AIRS, OCO-2, TES, and the ground-based Total Carbon Column Observing Network (TCCON) in a single user-friendly interface. The features of the data environment include the ability to download full mission-specific CO2-related Level 2 data files or to customize them based on location, time, data variable, version, and format. An important feature of the JPL CO2 data environment is that it allows generation of customized Level 3 products and provides detailed documentation on the mission specifications along with technical data information. These tools are designed to allow users streamlined access to relevant remote sensing and ground-based CO2 datasets in order to facilitate research on atmospheric CO2.
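A hedged sketch of the Level 2 to Level 3 step described above: bin individual soundings into a regular latitude-longitude grid of mean values. The grid resolution and field names are hypothetical, not the portal's actual processing:

```python
# Hypothetical sketch: grid mock Level 2 CO2 soundings into a Level 3 product.
import numpy as np

rng = np.random.default_rng(5)
lat = rng.uniform(-90, 90, 10000)
lon = rng.uniform(-180, 180, 10000)
xco2 = 400 + rng.normal(0, 2, 10000)       # mock column-averaged CO2 (ppm)

nlat, nlon = 36, 72                         # 5-degree grid
ii = np.clip(((lat + 90) / 5).astype(int), 0, nlat - 1)
jj = np.clip(((lon + 180) / 5).astype(int), 0, nlon - 1)

sums = np.zeros((nlat, nlon))
counts = np.zeros((nlat, nlon))
np.add.at(sums, (ii, jj), xco2)             # accumulate soundings per cell
np.add.at(counts, (ii, jj), 1)
level3 = np.where(counts > 0, sums / np.maximum(counts, 1), np.nan)
```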
2014-05-25
Jonathan Bidwell,Alexandra Holloway,Scott Davidoff
Many tasks call for efficient user interaction under time delay: controlling space instruments, piloting remote aircraft, and operating search and rescue robots. In this paper we identify an underexplored design opportunity for building robotic teleoperation user interfaces following an evaluation of operator performance during a time-delayed robotic arm block-stacking task with twenty-two participants. More delay resulted in greater operator hesitation and a decreased ratio of active to inactive input. This ratio can serve as a useful proxy for measuring an operator’s ability to anticipate the outcome of their control inputs before receiving delayed visual feedback. High anticipatory input ratio (AIR) scores indicate times when robot operators enter commands before waiting for visual feedback. Low AIR scores highlight when operators must wait for visual feedback before continuing. We used this measurement to help us identify particular sub-tasks where operators would likely benefit from additional support.
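A minimal sketch of the AIR idea, read here as the ratio of active to inactive input time; the paper's exact operationalization may differ, and the input trace is hypothetical:

```python
# Hypothetical sketch of the anticipatory input ratio (AIR).
def air(input_samples):
    """input_samples: 1 = operator commanding, 0 = idle, sampled uniformly."""
    active = sum(input_samples)
    inactive = len(input_samples) - active
    return active / inactive if inactive else float("inf")

trace = [1, 1, 0, 0, 0, 0, 0, 1, 1, 0] * 4   # mock 20 Hz joystick activity
print(f"AIR = {air(trace):.2f}")              # < 1: mostly waiting on feedback
```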
2014-04-26
Jeff Norris,Scott Davidoff
NASA’s Telexploration Project seeks to make us better explorers by building immersive environments that feel like we are really there. The Mission Operations Innovation Office and its Operations Laboratory at the NASA Jet Propulsion Laboratory (JPL) founded the Telexploration Project, and are researching how immersive visualization and natural human-robot interaction can enable mission scientists, engineers, and the general public to interact with NASA spacecraft and alien environments in a more effective way. These efforts have been accelerated through partnerships with many different companies, especially in the video game industry. These demos will exhibit some of the progress made by NASA and its commercial partners by allowing attendees to experience Mars data acquired from NASA spacecraft in a head mounted display using several rendering and interaction techniques.
2014-03-29
Daniel Barella,Sarah Churng,Conrad Egan,Rashad Moarref,Mitul Luhar,Hillary Mushkin,Scott Davidoff,Maggie Hendrie,Beverley J. McKeon
In recent work we have demonstrated that key features of wall turbulence can be captured by an input-output relationship between nonlinear forcing and velocity response, where the transfer function, constructed in wavenumber-frequency space, is named the resolvent. A basis for the wall-normal direction can be obtained by singular value decomposition of the resolvent, where the singular functions (or resolvent modes) represent the most amplified velocity response to the “most dangerous” input forcing. As such, a low-rank approximation can be obtained by retaining a limited number of singular functions; recognizable statistical and structural features of wall turbulence can be identified in even the rank-1 approximation, in which only the principal singular functions are investigated.
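Numerically, the decomposition just described is an SVD; a toy sketch with a random stand-in for the resolvent operator at one wavenumber-frequency pair:

```python
# Hypothetical sketch: rank-1 approximation from the principal singular
# functions of a (mock) resolvent operator.
import numpy as np

rng = np.random.default_rng(6)
H = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))  # mock resolvent

U, s, Vh = np.linalg.svd(H)
rank1 = s[0] * np.outer(U[:, 0], Vh[0])   # most-amplified response/forcing pair

# Share of the operator's energy (Frobenius norm squared) captured at rank 1
print(float(s[0] ** 2 / np.sum(s ** 2)))
```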
2013-11-25
Oleg Sindiy,Krystof Litomisky,Scott Davidoff,Frank Dekens
One of the barriers to the success of Model-Based Systems Engineering (MBSE) efforts is realizing effective communication of the output diagrams—i.e., modeling views—that address the concerns of, and inform, a broad spectrum of customer stakeholders. Abstracting and implementing the visual presentation of views—as products of very complex system models—is nearly as important to the effectiveness of these efforts to inform decision-making as the technical competency and completeness of those models. However, the information visualization of data from complex system models is often treated second to the technical considerations. This paper introduces high-level guidelines for visual presentation of MBSE efforts. These insights are presented such that they conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications. For example, the paper presents how modelers can take advantage of functionality in existing modeling notations and software tools that implement them, and also the importance of keeping in mind the final presentation media, presentation venues, and historically accepted viewpoint styles. The paper also presents a concept for how to move beyond traditionally static outputs; in turn, allowing users to dynamically manipulate the output views within the context of their real-time concerns to answer specific questions about the modeled system(s).
2013-03-19
[“William Odom”, “John Zimmerman”, “Scott Davidoff”, “Jodi Forlizzi”, “Anind K. Dey”, “Min Kyung Lee”]
Designing radically new technology systems that people will want to use is complex. Design teams must draw on knowledge related to people’s current values and desires to envision a preferred yet plausible future. However, the introduction of new technology can shape people’s values and practices, and what-we-know-now about them does not always translate to an effective guess of what the future could, or should, be. New products and systems typically exist outside of current understandings of technology and use paradigms; they often have few interaction and social conventions to guide the design process, making efforts to pursue them complex and risky. User Enactments (UEs) have been developed as a design approach that aids design teams in more successfully investigating radical alterations to technologies’ roles, forms, and behaviors in uncharted design spaces. In this paper, we reflect on our repeated use of UE over the past five years to unpack lessons learned and further specify how and when to use it. We conclude with a reflection on how UE can function as a boundary object and implications for future work.
2012-06-11
Scott Davidoff
Researchers recently gathered at Tsinghua University in Beijing, China, to share their accomplishments at the 13th annual ACM Conference on Ubiquitous Computing. This multidisciplinary conference featured advances in sensing technologies and activity-recognition algorithms, novel applications, and results from user evaluations. Smartphones continued to provide a versatile research platform, facilitating application delivery, sensor data capture, and computational social science on captured data.
2012-06-01
Scott Davidoff
By understanding how routines support people’s everyday activities, we can uncover new subjects for sensing and machine learning. This new data creates new ways for end-user applications to support daily life. I demonstrate the value of this approach using dual-income families. My studies of family logistics show that family members sometimes need but do not have access to information about the plans and routines of other family members. Because family members do not document this information, it does not exist as a resource they can turn to when needed. With only the GPS on commercial mobile phones, we can use machine learning and data mining to automatically document family logistical routines, and present that information to families to help them feel more in control of their lives.
2011-11-01
Scott Davidoff, Nicolas Villar, Alex S. Taylor, Shahram Izadi
The complexities and costs of deploying Ubicomp applications seriously compromise our ability to evaluate such systems in the real world. To simplify Ubicomp deployment we introduce the robotic pseudopod (P.Pod), an actuator that acts on mechanical switches originally designed for human control only. P.Pods enable computational control of devices by hijacking their mechanical switches – a practice we term mechanical hijacking. P.Pods offer simple, low-cost, non-destructive computational access to installed hardware, enabling functional, real world Ubicomp deployments. In this paper, we illustrate how three P.Pod primitives, built with the Lego Mindstorms NXT toolkit, can implement mechanical hijacking, facilitating real world Ubicomp deployments that would otherwise require extensive changes to existing hardware or infrastructure. Lastly, we demonstrate the simplicity of P.Pods by observing two middle school classes build working smart home applications in 4 hours.
2011-09-17
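As a concrete illustration of mechanical hijacking, consider driving a P.Pod-style actuator to flip an ordinary rocker light switch. The paper above builds P.Pods from the Lego Mindstorms NXT kit but does not prescribe a software stack; the sketch below is an assumption-laden example using the community nxt-python 2.x library, with the motor port, power level, and sweep angle all chosen hypothetically.

    # Hypothetical P.Pod-style "mechanical hijack" -- not the authors' code.
    # Assumes nxt-python 2.x and an NXT motor on port A whose arm rests
    # against a rocker light switch.
    import time
    import nxt.locator
    from nxt.motor import Motor, PORT_A

    brick = nxt.locator.find_one_brick()   # connect over USB/Bluetooth
    arm = Motor(brick, PORT_A)

    def toggle_switch(power=70, arc_degrees=60):
        # Sweep the arm forward to press the rocker, then retract so the
        # switch remains usable by hand -- the non-destructive property
        # the paper emphasizes.
        arm.turn(power, arc_degrees)
        time.sleep(0.2)
        arm.turn(-power, arc_degrees)

    toggle_switch()  # lights toggle on (or off)

Because the hijack lives entirely in the actuator, removing the P.Pod restores the original switch untouched.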
Scott Davidoff, Brian D. Ziebart, John Zimmerman, Anind K. Dey
Part of being a parent is taking responsibility for arranging and supplying transportation of children between various events. Dual-income parents frequently develop routines to help manage transportation with a minimal amount of attention. On days when families deviate from their routines, effective logistics can often depend on knowledge of the routine location, availability and intentions of other family members. Since most families rarely document their routine activities, the needed information is unavailable and coordination breakdowns are much more likely to occur. To address this problem we demonstrate the feasibility of learning family routines using mobile phone GPS. We describe how we (1) detect pick-ups and drop-offs; (2) predict which parent will perform a future pick-up or drop-off; and (3) infer if a child will be left at an activity. We discuss how these routine models give digital calendars, reminder and location systems new capabilities to help prevent breakdowns, and improve family life.
2011-05-07
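The paper above does not publish its implementation; the following minimal sketch (the trace format, thresholds, and all names are assumptions) illustrates one plausible shape for step (1), detecting candidate pick-ups and drop-offs as short dwells near a known activity location, plus a naive frequency baseline for step (2), predicting which parent performs a future pick-up. The learned models in the paper are considerably richer.

    # Illustrative sketch only -- not the paper's implementation.
    from collections import Counter
    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Fix:
        t: float      # unix timestamp, seconds
        lat: float
        lon: float

    def haversine_m(lat1, lon1, lat2, lon2):
        # Great-circle distance in meters.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def dwell_events(trace, place, radius_m=100, min_dwell_s=120):
        # Return (arrive, depart) pairs where the GPS trace stays within
        # radius_m of `place` for at least min_dwell_s -- a crude proxy
        # for a pick-up or drop-off at that activity site.
        events, start = [], None
        for fix in trace:
            near = haversine_m(fix.lat, fix.lon, *place) <= radius_m
            if near and start is None:
                start = fix.t
            elif not near and start is not None:
                if fix.t - start >= min_dwell_s:
                    events.append((start, fix.t))
                start = None
        return events

    def predict_parent(history):
        # history: (weekday, hour, parent) tuples for past pick-ups.
        # Baseline: predict the most frequent parent per time slot.
        counts = {}
        for weekday, hour, parent in history:
            counts.setdefault((weekday, hour), Counter())[parent] += 1
        return {slot: c.most_common(1)[0][0] for slot, c in counts.items()}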
Scott Davidoff
By understanding how routines support people’s everyday activities, we can uncover new subjects for sensing and machine learning. This new data creates new ways for end-user applications to support daily life. I demonstrate the value of this approach using dual-income families.
My studies of family logistics show that family members sometimes need but do not have access to information about the plans and routines of other family members. Because family members do not document this information, it does not exist as a resource they can turn to when needed.
With only the GPS on commercial mobile phones, we can use machine learning and data mining to automatically document family logistical routines, and present that information to families to help them feel more in control of their lives.
2011-05-01
Scott Davidoff
Even though the coordination of kids’ activities is largely successful, the modern dual income family still regularly experiences breakdowns in their practices. Families often rely on routines to help them coordinate when plans prove less effective. Routines, however, are rarely documented, challenging to express in detail, and frequently evolving, making them cumbersome to manually describe and so largely unavailable to computational systems as input. This work proposes that this disconnect can be overcome, and argues that unsupervised models of family routine can be learned using a single, lightweight sensor. This way, the successful but tacit knowledge of the routine might be captured and exploited by learning systems, providing a new kind of information for families and computational systems alike. A method is proposed to develop a Bayesian Network to reason about the state of family coordination. This model relies on learned routines of pickup and drop-off at kids’ activities.
2010-09-26
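The proposal above does not fix a network structure, so the following toy sketch is purely illustrative: the variables, edges, and probabilities are invented, wired up with the pgmpy library to show how such a model could be queried for coordination state on a non-routine day.

    # Toy Bayesian Network for family coordination -- structure and numbers
    # are hypothetical, not taken from the proposal.
    from pgmpy.models import BayesianNetwork
    from pgmpy.factors.discrete import TabularCPD
    from pgmpy.inference import VariableElimination

    # State 0 = yes/routine, state 1 = no/deviation throughout.
    model = BayesianNetwork([("RoutineDay", "ParentAvailable"),
                             ("ParentAvailable", "PickupCovered")])

    cpd_routine = TabularCPD("RoutineDay", 2, [[0.6], [0.4]])
    cpd_avail = TabularCPD("ParentAvailable", 2,
                           [[0.9, 0.5],    # P(available | routine, deviation)
                            [0.1, 0.5]],
                           evidence=["RoutineDay"], evidence_card=[2])
    cpd_pickup = TabularCPD("PickupCovered", 2,
                            [[0.95, 0.30],  # P(covered | available, not)
                             [0.05, 0.70]],
                            evidence=["ParentAvailable"], evidence_card=[2])
    model.add_cpds(cpd_routine, cpd_avail, cpd_pickup)
    assert model.check_model()

    # On a day that deviates from routine, how likely is the pick-up covered?
    infer = VariableElimination(model)
    print(infer.query(["PickupCovered"], evidence={"RoutineDay": 1}))

A real system would learn these conditional probabilities from the sensed pick-up and drop-off routines rather than hand-specifying them.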
Scott Davidoff, John Zimmerman, Anind K. Dey
Researchers have detailed the importance of routines in how people live and work, while also cautioning system designers about the importance of people’s idiosyncratic behavior patterns and the challenges they would present to learning systems. We wish to take up their challenge, and offer a vision of how simple sensing technology could capture and model idiosyncratic routines, enabling applications to solve many real world problems.
To identify how a simple routine learner can demonstrate this in support of family coordination, we conducted six months of nightly interviews with six families, focusing on how they make and execute plans. Our data reveals that only about 40% of events unfold in a routine manner. When deviations do occur, family members often need but do not have access to accurate information about their routines. Because about 90% of calendar content concerns deviations rather than routines, families cannot rely on calendars to support them in these moments. We discuss how coordination tools, like calendars and reminder systems, would improve coordination and reduce stress when augmented with routine information, and how commercial mobile phones can support the automatic creation of routine models.
2010-04-10
[“Min Kyung Lee”, “Scott Davidoff”, “John Zimmerman”, “Anind K. Dey”]
For many years technology researchers have promised a smart home that, through an awareness of people’s activities and intents, will provide the appropriate assistance to improve human experience. However, before people will accept intelligent technology into their homes and their lives, they must feel they have control over it (Norman 1994). To address this issue, social researchers have been conducting ethnographic research on families, looking for opportunities where technology can best provide assistance. At the same time, technology researchers studying “end user programming” have focused on how people can control devices in their homes. We observe an interesting disconnect between the two approaches: the ethnographic work reveals that families desire to “feel in control of their lives,” more than in control of their devices. Our work attempts to bridge the divide between these two research communities by exploring the role a smart home can play in the life of a dual-income family. If we first understand the roles a smart home can play, we can then more appropriately choose how to provide families with the control they desire, extending the control of devices to incorporate the control of their lives families say they need.
2008-12-03
Scott Davidoff, Min Kyung Lee, Anind K. Dey, John Zimmerman
While the user-centered design methods we bring from human-computer interaction to ubicomp help sketch ideas and refine prototypes, few tools or techniques help explore divergent design concepts, reflect on their merits, and come to a new understanding of design opportunities and ways to address them. We present Speed Dating, a design method for rapidly exploring application concepts and their interactions and contextual dimensions without requiring any technology implementation. Situated between sketching and prototyping, Speed Dating structures comparison of concepts, helping identify and understand contextual risk factors and develop approaches to address them. We illustrate how to use Speed Dating by applying it to our research on the smart home and dual-income families, and highlight our findings from using this method.
2007-09-16
Min Kyung Lee, Scott Davidoff, John Zimmerman, Anind K. Dey
Dual-income families experience stress as they attempt to manage the conflicting responsibilities of work, school, home, and enrichment activities. Opportunities exist for technology to provide support in managing their children’s activities, helping parents feel more in control of their lives. In this paper, we explore opportunities to support children’s activities. Based on our contextual fieldwork with dual-income families, we suggest a concept of the Smart Bag, which addresses two design opportunities: (i) a reminder system that helps people remember their schedules and what they need to take, and (ii) a reminder system that allows parents to engage in parenting.
2007-08-22
Scott Davidoff, Min Kyung Lee, Charles Yiu, John Zimmerman, Anind K. Dey
Seeking to be sensitive to users, smart home researchers have focused on the concept of control. They attempt to allow users to gain control over their lives by framing the problem as one of end-user programming. But families are not users as we typically conceive them, and a large body of ethnographic research shows how their activities and routines do not map well to programming tasks. End-user programming ultimately provides control of devices. But families want more control of their lives. In this paper, we explore this disconnect. Using grounded contextual fieldwork with dual-income families, we describe the control that families want, and suggest seven design principles that will help end-user programming systems deliver that control.
2006-09-17
Scott Davidoff, Min Kyung Lee, John Zimmerman, Anind K. Dey
Dual-income families experience stress as they attempt to manage the conflicting responsibilities of work, school, home, and enrichment activities. Opportunities exist for technology to provide support in managing their children’s activities, helping parents feel more in control of their lives. In this paper, we explore opportunities to support children’s activities. Based on our contextual fieldwork with dual-income families, we suggest a concept of the Smart Bag, which addresses two design opportunities: (i) a reminder system that helps people remember their schedules and what they need to take, and (ii) a reminder system that allows parents to engage in parenting.
2006-04-05
Scott Davidoff, Carson Bloomberg, Ian Anthony R. Li, Jennifer Mankoff, Susan R. Fussell
Substantial stumbling blocks confront computer-illiterate elders. We introduce a novel user interface technology to lower these start-up costs: the book as user interface, or BUI. Book pages contain both step-by-step instructions and tangible controls, turning a complex interaction into a walk-up-and-use scenario. The system extends support beyond the technical artifact to a go-to relationship: ElderMail users designate an internet-savvy, trusted friend or relative to help with complex tasks. In this paper, we conduct a preliminary evaluation of a BUI-based email system, and report our findings. While research has augmented paper artifacts to provide alternate access into the digital world, we find that elders use the BUI as a way to circumvent the digital world.
2005-04-02
[“Heidi Dangelmaier”, “Scott Davidoff”]
2002-03-01
Heidi Dangelmaier, Scott Davidoff
2001-08-15
Heidi Dangelmaier, Scott Davidoff
Design directly impacts business. Strong design excites and compels. Generic design condemns you to anonymity. Off-target design sends the wrong message. Cheap design is a turnoff. Bottom line: Design can make or break the sale.
2001-05-25
Heidi Dangelmaier, Scott Davidoff
2001-04-25
Heidi Dangelmaier, Scott Davidoff
2001-02-12