Modeling Data Irregularities and Structural Complexities in Data Envelopment Analysis
In a relatively short period of time, Data Envelopment Analysis (DEA) has grown into a powerful quantitative, analytical tool for measuring and evaluating performance. It has been successfully applied to a wide variety of problems in many different contexts worldwide. Many of these problems have resisted other methodological approaches because of the multiple levels of complexity that must be considered. Several examples of multifaceted problems in which DEA analysis has been successfully used are: (1) maintenance activities of US Air Force bases in geographically dispersed locations, (2) police force efficiencies in the United Kingdom, (3) branch bank performance in Canada, Cyprus, and other countries, and (4) the efficiency of universities in performing their education and research functions in the U.S., England, and France. In addition to localized problems, DEA applications have been extended to performance evaluations of 'larger entities' such as cities, regions, and countries. These extensions have a wider scope than traditional analyses because they include "social" and "quality-of-life" dimensions, which require the modeling of both qualitative and quantitative data in order to analyze the layers of complexity involved in evaluating performance and to provide solution strategies.
DEA is computational at its core, and this book will be one of several books that we will look to publish on the computational aspects of DEA. This book by Zhu and Cook deals with the micro aspects of handling and modeling data issues in DEA problems. DEA's use has grown with its capability of dealing with complex "service industry" and "public service domain" problems that require modeling both qualitative and quantitative data. This will be a handbook treatment dealing with specific data problems, including the following: (1) imprecise data, (2) inaccurate data, (3) missing data, (4) qualitative data, (5) outliers, (6) undesirable outputs, (7) quality data, (8) statistical analysis, and (9) software and other data aspects of modeling complex DEA problems. In addition, the book will demonstrate how to visualize DEA results when the data are more than three-dimensional, and how to identify efficient units quickly and accurately.
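To give a concrete sense of the computation at DEA's core, the sketch below solves the standard input-oriented CCR envelopment model for one decision-making unit (DMU): minimize theta subject to the composite unit using no more than theta times DMU o's inputs while producing at least its outputs. This is a minimal illustration, not material from the book; the function name and data are the author's own, assuming SciPy's `linprog` is available.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency score of DMU o.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    Solves: min theta  s.t.  sum_j lam_j * x_j <= theta * x_o,
                             sum_j lam_j * y_j >= y_o,  lam >= 0.
    """
    n, m = X.shape            # number of DMUs, number of inputs
    s = Y.shape[1]            # number of outputs
    # Decision variables: [theta, lam_1, ..., lam_n]; minimize theta.
    c = np.zeros(n + 1)
    c[0] = 1.0
    # Input constraints: sum_j lam_j x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(-1, 1), X.T])
    b_in = np.zeros(m)
    # Output constraints rewritten for <= form: -sum_j lam_j y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.concatenate([b_in, b_out]),
                  bounds=[(0, None)] * (n + 1))
    return res.x[0]

# Tiny example: two DMUs, one input, one output.
X = np.array([[2.0], [4.0]])
Y = np.array([[1.0], [1.0]])
print(ccr_efficiency(X, Y, 0))  # DMU 0 is efficient: score 1.0
print(ccr_efficiency(X, Y, 1))  # DMU 1 uses twice the input: score 0.5
```

A score of 1 identifies an efficient unit; scores below 1 measure the proportional input contraction needed to reach the efficient frontier.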
There certainly is a need for such a book. We have long faced the question of how to deal with computational and data-variety problems in the DEA literature. The authors are well recognized in these areas and have done substantial research and consulting on the computational and data aspects of DEA. I believe the book will be of high quality based upon their authority and experience. Wade Cook and Joe Zhu are well-known researchers in DEA. Their work spans the history of DEA as well as its range, from theory to practice. They are proposing a timely edited volume inviting "DEA experts on DEA computational and data issues to address important DEA implementation difficulties." The idea for the book is good and is in tune with the evolutionary stage of the topic.