Editorial
1. The 55th session of the UN Statistical Commission: promoting inclusivity and protecting statistical systems from political interference
The annual session of the UN Statistical Commission (UNSC) is the place you want to be if you are a manager of an official statistics agency at the national, regional or international level. The Commission is the highest intergovernmental forum in the global statistical system, where emerging and strategic statistical issues are discussed and new methods and standards are considered and endorsed. This editorial provides a summary account of the most important decisions taken at the latest session of the Commission, held from 27 February to 1 March 2024 at UN Headquarters in New York.
The fifty-fifth session of the Commission saw the in-person participation of representatives of 129 countries, 45 regional or international agencies and 8 INGOs. In total, over 600 experts attended the Commission and related side events, bringing participation in this annual event almost back to the levels reached before the COVID pandemic.
The recent session of the Statistical Commission produced a series of pivotal decisions that are expected to have profound implications for the global statistical system. These decisions, spanning a spectrum of topics from inclusivity of representation, to methodological enhancements in various statistical domains, to the establishment of working groups for the discussion and development of statistical standards in new areas, underscore the Commission’s firm commitment to advancing statistical methodologies and frameworks in an ever-evolving landscape.
But first and foremost, the fifty-fifth session of the Commission was the occasion to celebrate the 30th anniversary of the endorsement of the UN Fundamental Principles of Official Statistics (FPOS) by the Commission itself and the 10th anniversary of their approval at the highest political level by the UN General Assembly. In preparation for this event, UNSD spearheaded a wide-ranging series of initiatives at global and regional levels in collaboration with regional organizations, expert groups, training institutions, and statistical associations. The primary objectives of these initiatives were the development of the Terms of Reference for the Independent Advisory Board on the FPOS (IAB) and the annotated outline of the revised FPOS Implementation Guidelines to be endorsed by the fifty-fifth session of the Commission. Despite this lengthy and comprehensive preparation process, the Commission in Decision 1 endorsed the proposal to prepare two sets of implementation guidelines tailored to different types of audiences, but – somewhat unexpectedly – chose to put on hold the establishment of an Independent Advisory Board on the FPOS, until revised terms of reference, membership and criteria for the selection of experts were decided ‘through a transparent and inclusive consultation process’.
Before delving into the possible reasons that led to this controversial outcome and its implications, let us review in more detail the most important decisions taken during the Commission’s 55th session, aimed at ensuring the continuous relevance, accuracy, and credibility of official statistics worldwide. In doing so, I will follow the official report on the fifty-fifth session.1
A strong commitment to a more representative and inclusive Commission led to the approval of Decision 2 and of a draft resolution to be adopted by the Economic and Social Council, recognizing the need to enlarge the Commission’s membership by progressively increasing the number of official members from 24 to 54 by the fifty-ninth session in 2028. This decision places an even greater responsibility on the heads of national statistical offices to effectively represent and promote the active participation of all relevant stakeholders at the country level, including the private sector, academia, and civil society, in discussions on issues related to data and statistics. It takes much more than a change in the terms of reference (ToR) of the Commission to improve data governance at national and global levels.
Decision 3 highlighted the Commission’s enduring effort to keep refining data and indicators to monitor the 2030 Agenda for Sustainable Development, enabling methodological enhancements, improved documentation, and the establishment of clear criteria for the final comprehensive review of the global indicator framework, to be completed in 2025 on the basis of the deliberations of the Inter-agency and Expert Group on SDG Indicators. Meanwhile, with Decision 4, the Commission endorsed a phased approach to aligning the Cape Town Global Action Plan with the Hangzhou Declaration, where the second phase will focus on innovative instruments to meet the financing and capacity development needs for statistics for the 2030 Agenda, including support for emerging data ecosystem developments.
Decision 5 supported the proposal that the regional and global hubs, recently created to enhance capacity in big data and data science, should expand their partnerships with national statistical offices, international organizations, academia, and the private sector, and welcomed the development of a playbook for integrating data science into the work of statistical offices.
Decision 6 underscored the Commission’s appreciation for the Praia Group’s methodological work on governance statistics, in particular on non-discrimination and equality and on participation in political and public affairs, and emphasized the importance of capacity development activities in this domain with greater involvement of the Regional Commissions.
In Decision 7, the Commission endorsed the recommendations for the update of the 2008 SNA, paving the way for the endorsement next year of the 2025 SNA, which will broaden the scope of transactions, including the production boundary, and the accounting framework itself to address well-being and sustainability by changing the sequence of economic accounts (e.g. to better account for natural capital) or through the compilation of extended/thematic accounts (e.g. on unpaid household activities). The update of the 2008 SNA will be closely aligned with the update of the sixth edition of the Balance of Payments and International Investment Position Manual.
Environmental-economic accounting took centre stage in Decision 9, where the Commission endorsed the proposed revision of the SEEA Central Framework, requesting the Experts’ Committee to submit, at the following session, the list of issues and the road map for its update, and welcomed the establishment of global datasets for air emission accounts and energy accounts, encouraging countries to submit data and disseminate their accounts.
Decision 10 acknowledged the importance of strengthening food security and nutrition statistics, welcomed the Committee on World Food Security policy recommendations on the collection and use of food security and nutrition data, approved the inclusion of a new agenda item on food security and nutrition statistics into the multiannual work programme of the Commission, recommended considering food security and nutrition as a standalone statistical data domain in the Classification of Statistical Activities, and endorsed the guidelines on processing food data from household consumption and expenditure surveys.
Decision 12 emphasised the importance of time-use surveys for compiling gender statistics, endorsed the revised UN Guide for Producing Time-use Statistics, and acknowledged progress made in mainstreaming gender perspectives into trade and business statistics as well as climate change statistics.
Population and housing census initiatives were addressed in Decision 13, with the Commission requesting the UN Statistics Division to prepare a draft resolution to launch the 2030 World Population and Housing Census Programme at the 56th session of the Statistical Commission in 2025, and, in that perspective, to finalise the fourth revision of the Principles and Recommendations guidelines and to prepare a comprehensive report highlighting achievements and challenges faced by countries in the implementation of the current 2020 round of censuses.
In Decision 15 the Commission focused on tourism statistics, endorsing the statistical framework for measuring the sustainability of tourism and requesting the preparation of dedicated guidelines and an implementation programme to guide capacity development efforts.
Decision 16 emphasised the integration of business and trade statistics, endorsing the second edition of the Handbook on Measuring Digital Trade, commending progress made on the global initiative on unique identifiers for the improvement of the statistical business register and for other users of register data, and welcoming the finalisation of the manual on the maturity model for statistical business registers.
Decision 17 addressed data and indicators for the 2030 Agenda, emphasising the importance of household surveys and citizen data in bridging data gaps. In this respect, the Commission supported the planned revision of the UN handbooks on household surveys and welcomed the draft Copenhagen Framework on Citizen Data, supporting its continuous testing and refinement.
Finally, Decision 19 focused on international statistical classifications, endorsing the updates to the product and industrial classifications and supporting more frequent revisions to accurately reflect economic realities.
In summary, by endorsing innovative methodologies, fostering partnerships, and advocating for the integration of diverse perspectives and stakeholders in an enlarged data ecosystem, the Commission has this year once again reaffirmed its pivotal role in shaping the future of official statistics worldwide.
Now, let us examine in more detail the intricacies behind Decision 1. The strong disagreement expressed during the session on the formulation of the ToR of the IAB on the FPOS and its proposed membership reflects the opposition of countries to the establishment of a perceived watchdog that would have interfered with their sovereignty and national prerogatives, particularly as it would have involved external oversight or intervention in domestic statistical affairs. A pre-determined list of IAB members, invited by UNSD, without the possibility of suggesting alternative names or making decisions on the basis of transparent criteria, also contributed to triggering a vigorous negative reaction to the UNSD proposal.
Countries generally agreed with the establishment of the IAB on the FPOS, provided that its responsibilities were limited to an advocacy and guidance role, helping statistical institutions at national and global levels implement the FPOS and promoting adherence to them. Moreover, countries expressed their desire to contribute to the selection of the board’s members, highlighting the need to choose members with diverse professional backgrounds and a proven record of independence and accountability, ensuring that the IAB would operate without bias and would uphold the principle of integrity of official statistics.
However, removing from the ToR of the Independent Advisory Board the function of monitoring non-implementation of, and non-compliance with, the FPOS leaves a huge gap in its assurance framework. If clear episodes of political interference in the production of official statistics were to occur, there would be no international mechanism with the capacity and authority to intervene, denounce the episodes, and call the attention of the global statistical community and its institutions to these infringements of the FPOS. Examples of political interference in official statistics have occurred often in recent history and across various countries, highlighting the need for a global safety mechanism to advocate for the protection of the integrity of statistical institutions and their statistical outputs. And these episodes are only the tip of the iceberg. Various forms of political pressure, milder or stronger, that are not generally known outside the institution (or even beyond the restricted circle of people in senior-level positions), are part of the everyday experience of whoever works in national or international statistical institutions.
Political interference and, more generally, any form of political pressure in official statistics pose significant threats to the integrity, credibility, and utility of statistical data. When politicians exert undue influence over statistical agencies, the independence of these institutions is compromised, as is the impartiality and objectivity of their work. Political interference can lead to the manipulation of statistical data to suit specific agendas, to present a rosier picture of the government’s performance, or to downplay inconvenient truths. This distortion of data, in turn, undermines the credibility of official statistics and hampers the ability of policymakers, the private sector, and the public to make informed decisions. When statistical agencies are perceived as subject to political interests, public trust in their outputs, which is essential for their legitimacy and effectiveness, is severely affected. Scepticism regarding the accuracy and impartiality of official statistics can lead to public disillusionment and cynicism, undermining their utility as a tool for governance and policy formulation. Moreover, political interference in statistical reporting can weaken the democratic process and undermine democratic institutions by concealing or distorting information.
The system currently implemented for monitoring and enforcing the UN Fundamental Principles at the global level, based on the survey carried out every ten years by UNSD, whose results are then presented to the UNSC, is inherently weak, episodic, and bound to provide a rosier perspective on their actual application. The survey is based on a self-assessment questionnaire and relies completely on data provided by national statistical offices and political authorities. Without independent external validation of this information, the monitoring system suffers from some critical limitations, as it may provide incomplete or biased evidence and may overstate the degree of implementation of the FPOS.
The additional tools provided, which consist of implementation guidelines, advocacy initiatives, best practice compilation, training and technical assistance, are certainly helpful, but being ultimately focused on self-monitoring and self-assessment, they are inherently inadequate and insufficient for effectively improving the implementation of the FPOS, as they do not offer sufficient deterrence against issues such as political pressure and interference with statistical production and quality.
Self-assessment practices should be supplemented by external examination, verification, and follow-up monitoring conducted by an independent party to ensure a higher level of quality from a global perspective, given that official statistics are a global public good serving the needs of the entire global community, not only those of national policy-makers or national stakeholders.2
The very fact that all countries, without exception, perceived the establishment of such a body with monitoring prerogatives over the production of official statistics as unwanted interference in their national affairs clearly indicates the necessity of its existence. Equating the prerogatives of a national statistical office with national interests fails to recognise that national statistical institutions are all too often the weaker link in any institutional set-up, even in the more democratic ones. Moreover, it does not take into account that the production of official statistics is not solely an internal national matter.
The proposed IAB would aim to protect any statistical institution from undue external political pressure and breaches of statistical principles. Moreover, such a body would help ensure that statistical outputs are the result of a transparent, methodologically sound, and impartial process aimed at producing a global public good. In essence, it would seek to ensure that statistical practices adhere to universal standards, promoting trust, accuracy, and reliability in data collection and analysis on a global scale.
Safeguarding the independence of statistical agencies is essential for upholding the principles of transparency, accountability, and democratic governance. Implementing robust legal frameworks, ensuring institutional autonomy, upholding professional standards, and adhering to international standards can contribute to preserving the independence and integrity of official statistics. These safeguards, however, may not be sufficient. For these reasons, establishing an independent advisory body to the UN Statistical Commission has the potential to mitigate the risks of political interference and strengthen the integrity of official statistics. Certainly, careful consideration must be given to its mandate, membership, and operational mechanisms to ensure its effectiveness and legitimacy. Collaboration with member states will be essential in navigating the complexities and challenges of implementing such a body successfully.
2. The contents of this issue
2.1 Interview with Stefan Schweinfest
This issue of the SJIAOS starts with an insightful interview with Stefan Schweinfest, director of the UN Statistics Division, carried out by Pieter Everaers. In this interview, Stefan Schweinfest talks about his career and professional milestones, discussing the evolving responsibilities of the UN Statistics Division in response to the increased complexity of the data ecosystem landscape. He also elaborates on the contribution of the SDGs to the development of statistics, the emergence of the Geographic Information System community, and the important role of the UN Fundamental Principles of Official Statistics for the global statistical system.
2.2 Special section: The future of agricultural statistics
This section of the Journal, organized by Linda J. Young as Guest Editor, comprises twelve papers selected from those presented at the ninth International Conference on Agricultural Statistics (ICAS IX), which was held at the World Bank headquarters in Washington, DC, on May 17–19, 2023. The International Conference on Agricultural Statistics is the most important conference dealing with this topic and is organized every three years under the aegis of the Committee on Agricultural Statistics of the ISI. ICAS brings together experts, official statisticians, and data users from around the world to share research accomplishments and discuss cutting-edge methodologies and best practices in data collection, processing, and analysis for agricultural statistics.
The selected papers explore various themes and advancements shaping the future of agricultural statistics, addressing key challenges and opportunities on the horizon. Each paper delves into a distinct aspect of this evolving landscape, shedding light on the latest innovations as well as on the changing needs and prospects for agricultural statistics. From discussions on modernization efforts within National Statistical Offices to the incorporation of machine learning algorithms in agricultural surveys, and from the analysis of household consumption data for food security to the assessment of climate change impacts on water resources, these papers cover a wide array of topics.
This section opens with ‘The Production of Official Agricultural Statistics in 2040: What Does the Future Hold?’ in which five statistical leaders with diverse perspectives, Linda Young, Gero Carletto, Graciela Márquez, Dominik Rozkrut, and Spiro Stefanou, envision the not-too-distant future of official agricultural statistics in 2040. Examining ongoing efforts to modernize the production of official agricultural statistics within National Statistical Offices, they discuss how these offices are embracing several technological advancements, such as migrating data and processes to the cloud, utilizing web-based surveys, and integrating diverse data sources like administrative and remotely sensed data. One key aspect highlighted is the transition from traditional data collection methods to digital formats, emphasizing the increasing incorporation of non-survey data into surveys for more accurate estimates. In this regard, the importance of maintaining the quality and comparability of statistics over time through established standards and validation processes is underscored. Moreover, the paper touches upon the anticipated role of modelling in the future statistical landscape and the optimization of statistical dissemination for user accessibility, largely facilitated by cloud-based platforms. While also acknowledging the challenges ahead, particularly in securing dependable funding and balancing current demands with the adoption of new processes, the paper concludes with an optimistic outlook, inspired by the collective vision of leaders in the field, guiding the trajectory of official agricultural statistics towards 2040.
The second paper is ‘Country Statistical Capacity: A Critical Review of Measurement Issues and Its Role for Development’ by Hai-Anh Dang, Dean Jolliffe, Umar Serajuddin, and Brian Stacy (all from the World Bank). After discussing the lack of academic studies on statistical capacity, its measurement challenges, and its role in development, the paper focuses on the critical role that tools like the Statistical Performance Indicators and Index (SPI) can have in measuring and promoting the enhancement of country statistical capacity. The Statistical Performance Indicators and Index, developed by the World Bank, is compared with measures like the Statistical Capacity Index (SCI), the ODIN index, and others. The SPI stands out for its clear conceptual and mathematical foundations, providing a comprehensive approach to various aspects of statistical capacity across countries. For these properties, and despite being relatively new, the SPI has been selected to monitor SDG Target 17.18 and utilized in studies and influential reports. Preliminary analysis shows a positive correlation between the SPI and GDP per capita, governance effectiveness, human capital, and indicators of poverty and inequality. The paper also discusses potential future uses and developments for the SPI, beyond its current role.
In ‘Using Machine Learning Algorithms to Identify Farms on the 2022 Census of Agriculture’ Gavin Corral, Denise Abreu, Linda J. Young (all from USDA-NASS), and Katherine Vande Pol (Smithfield Foods Inc.) demonstrate how machine learning algorithms are revolutionizing agricultural surveys, improving the accuracy of data collection and analysis. The agricultural industry in the US is in transition from predominantly mid-sized operations to fewer large operations. As a result, the participation of large producers in surveys is needed for accurate agricultural estimates. The resulting burden for these producers has contributed to dwindling response rates. To reduce burden and mitigate the effects of lower response rates, NASS is exploring the potential to complete some or all of a nonresponding record through imputation. This paper focuses on a large set of records whose farm status has not been determined. These records tend to have higher proportions of producers from under-represented groups than other records. Determining the probability that each of these records is a farm is an important step in the imputation process. The records with undetermined status that responded to the 2017 Census were used to develop models to predict farm status using multiple data sources. The evaluated models included bootstrap random forest (RF), logistic regression (LR), neural network (NN), and support vector machine (SVM). Of the models tested, RF had the best outcomes according to most metrics. Key variables influencing prediction included the year the record was added, primary operator sex, source of the record, and operation state.
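For readers curious about what such a farm-status classification exercise looks like in practice, the following is a minimal, purely illustrative sketch in Python (it is not the NASS pipeline, and the file and column names are hypothetical): labelled records from a previous census are used to compare classifiers by cross-validated AUC, and the preferred model can then score the records with undetermined status.

```python
# Illustrative sketch only (not the NASS production system); file and column
# names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

df = pd.read_csv("census_2017_undetermined_responders.csv")      # hypothetical extract
features = ["year_record_added", "operator_sex", "record_source", "state"]
X = pd.get_dummies(df[features], drop_first=True)                 # encode categorical predictors
y = df["is_farm"]                                                 # 1 if the record met the farm definition

models = {
    "random forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: mean cross-validated AUC = {auc:.3f}")

# The preferred model can then assign each undetermined record a predicted
# probability of being a farm, to be used in the imputation step.
```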
The fourth paper is ‘Crop Sequence Boundaries: Delineated Fields Using Remotely Sensed Crop Rotations’ by Kevin A. Hunt, Jonathan Abernethy (both from USDA-NASS), Peter C. Beeson, Maria Bowman, Steven Wallander, and Ryan Williams (all from USDA-ERS). Gridded landcover datasets like the NASS Cropland Data Layer (CDL) offer valuable resources for analysing cropland management statistically. However, due to spatial correlation within homogeneous management units (fields), their analysis may lead to ecological fallacies and overlook crucial drivers of agricultural outcomes observable only at the management unit level. To address this issue, a research project developed Crop Sequence Boundaries (CSBs), algorithm-based geospatial polygons derived from historic CDLs and road/rail networks, to capture areas with common cropping sequences. CSBs cover the entire contiguous United States and provide accurate and repeatable delineation of fields over time. The use of cloud computing, as well as Google Earth Engine and ArcGIS Pro, facilitated the creation of CSBs, ensuring efficiency and scalability. As a result, CSBs now serve as a new proxy for rotation-based field mapping, enhancing large-scale crop mapping applications with vector-based data.
In ‘Using Paradata to Assess Respondent Burden, Survey Costs, and Interviewer Effects in Multi-Topic Household Surveys’ Ardina Hasanbasri, Talip Kilic, Gayatri Koolwal, and Heather Moylan (all from the World Bank) evaluate the significance of paradata in assessing respondent burden and interviewer effects in household surveys, optimizing data collection processes. Given their importance for tracking progress towards global development goals, particularly in low- and middle-income countries, the complexity of household surveys has progressively increased over time. At the same time, the recent transition to computer-assisted interviewing has generated paradata, offering insights into respondent burden, interview costs, and interviewer effects. This study analyses paradata from nationally representative household surveys in Cambodia, Ethiopia, and Tanzania conducted between 2018 and 2020 using the Survey Solutions CAPI platform. The data allowed the authors to identify the most time-consuming modules and their costs, as well as interviewer effects on module duration, which are estimated to be greater than in high-income contexts. Identifying modules with high interviewer effects can guide the organization and implementation of future surveys by allocating additional training and supervision resources. The paper further provides guidelines on the use of paradata for module-level analysis to aid operational survey decisions, such as using interview length to estimate unit costs for budgeting purposes and understanding interviewer effects through a multi-level model. The findings, disaggregated by module, point to where additional interviewer training, fieldwork supervision, and data quality monitoring may be needed in future surveys.
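As a rough illustration of the kind of multi-level model mentioned above (a sketch only, with hypothetical variable names, not the authors' specification), module-level durations can be regressed on module indicators with a random intercept for each interviewer; the interviewer share of the residual variance then summarizes how strong interviewer effects are.

```python
# Sketch of a multi-level (mixed-effects) model for interviewer effects on
# module duration; file and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

para = pd.read_csv("paradata_module_durations.csv")   # one row per module per interview
model = smf.mixedlm("duration_min ~ C(module)", data=para, groups=para["interviewer_id"])
result = model.fit()
print(result.summary())

# Intraclass correlation: share of unexplained variation attributable to interviewers.
interviewer_var = result.cov_re.iloc[0, 0]
icc = interviewer_var / (interviewer_var + result.scale)
print(f"Interviewer ICC: {icc:.2f}")
```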
In ‘More Efficient Use of Household Consumption and Expenditure Surveys (HCES) to Inform Food Security’ Ellen Cathrine Kiosterud, Astrid Mathiassen (both from Statistics Norway), and Owen Siyoto (COMESA) address the challenge of effectively utilizing these surveys for both poverty and food security analysis. In fact, the data preparation processes for these distinct purposes often diverge, leading to inefficiencies and duplication of efforts. To address this issue, Statistics Norway has implemented a project in the COMESA region that involved the development of guidelines for preparing HCES food data for food security analysis, building NSO capacity, and promoting regional cooperation. The recent endorsement of the guidelines by the latest session of the UN Statistical Commission should ensure their widespread adoption, standardizing processes for the production of food security statistics and their results, and making HCES food data readily available for various analyses, including regional food context assessments.
In ‘Computing levels of nutrient inadequacy from household consumption and expenditure surveys: a case study’ Cristina Álvarez-Sánchez (UNICEF), Ana Moltedo, Nathalie Troubat, and Carlo Cafiero (all from FAO) present a methodological approach to estimate the prevalence of nutrient inadequacy (PoNI) for eight micronutrients using household consumption and expenditure survey (HCES) data. In the absence of individual quantitative dietary intake surveys representative of the entire population, the study uses HCES data as an alternative source for PoNI estimation, particularly in low- and middle-income countries. The study proposes a methodological approach to adjust the excess variability present in HCES data due to random measurement errors, accurately estimate the distribution of usual consumption, and use the adjusted data to estimate the prevalence of nutrient inadequacy. Using data from the 2015 Bangladesh Integrated Household Survey, the study compares estimates of nutrient inadequacy derived from adjusted household-level consumption data with those obtained from individual-level consumption data collected through 24-hour recalls. Results indicate that PoNI estimates based on 7-day recall are lower than those calculated from 24-hour recall data, due to the larger average intake estimates from 7-day recall data. After controlling for differences in average intake estimates, the PoNI values from the two datasets are remarkably close. This highlights the potential of HCES data, collected according to agreed-upon international standards and properly adjusted for excess variability, for estimating the level of between-subject variability in usual nutrient intake in a population.
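For readers less familiar with this literature, the generic logic behind such estimates, under the widely used EAR cut-point approach, can be written as follows (an illustrative, textbook formulation, not the paper's exact procedure):

```latex
\mathrm{PoNI} \;=\; \Pr\!\big(Y < \mathrm{EAR}\big) \;=\; F_{Y}(\mathrm{EAR}),
\qquad
\widehat{\operatorname{Var}}(Y) \;\approx\; \operatorname{Var}(X) - \sigma^{2}_{\mathrm{within}},
```

where X is the apparent intake observed in the survey, Y is usual intake, and the within-subject (day-to-day and measurement-error) variance is the excess variability removed in the adjustment before the proportion of the population falling below the Estimated Average Requirement (EAR) is computed.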
In ‘Who are Small-Scale Food Producers in Italy? Comparisons among Different Approaches’ Roberto Gismondi (ISTAT) compares different methodologies to identify small-scale food producers (SSFs) in Italy. This group of farmers is vital for promoting sustainable and productive agriculture, as recognized by the SDG target of doubling the productivity and income of small-scale food producers by 2030 (SDG target 2.3). Key challenges in monitoring this target are the definition of SSFs, which can be identified on the basis of physical or economic criteria, and the computation of net incomes for all active farmers. Two approaches were compared using the standard output indicator to define SSFs. The first approach utilized annual agricultural surveys that lack data on very small farms, while the second approach utilized census data that lacks income information for all farmers. Both approaches showed significant income inequalities between SSFs and the rest of the farmers, with this gap widening over time. The paper showed that the FAO methodology aimed at identifying SSFs may be replaced by an alternative and simpler methodology, which integrates both approaches and adopts a broader set of indicators (access to resources, technology, and markets) to identify SSFs. This new methodology can be implemented annually, as well as at the regional level. While the study recognizes that these results are valid in the Italian context, it also highlights that nowadays yearly data on surfaces, livestock, and the standard output of farmers are available in many countries.
The ninth paper in this section, ‘FAOSTAT Food Value Chain Domain Implementation: Input Output Modelling and Analytical Applications’, is by Silvia Cerilli, Michele Vollaro, Veronica Boero, Olivier Lavagne d’Ortigue (all from FAO), and Yi Jing (Cornell University). The authors introduce the FAOSTAT Food Value Chain Domain and show its potential to inform policy decisions by providing comprehensive data on food systems. This domain, in particular, provides insights into the distribution of domestic food expenditures across various industries and primary factors within the food value chain, including agriculture, food processing, wholesale and retail trade, accommodation, and food services. Utilizing the Global Food Dollar methodology, FAO applied a Leontief decomposition approach to derive input-output tables that allow for the analysis of the relative contributions of different sectors to food systems. In addition, the study proposes a conversion methodology that allows the extension of this analysis to countries with limited data availability, leveraging supply-use tables to impute input-output tables. This expansion enhances the potential coverage and time frame of the analysis, particularly for African, Asian, and Latin American countries. The paper describes these methodologies, emphasizing their adherence to international statistical standards and their potential to support food policies at international, regional, and national levels in alignment with the goals and targets of the 2030 Agenda.
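To make the Leontief logic behind such decompositions concrete, here is a tiny self-contained numerical sketch (toy numbers, not FAOSTAT data): given a matrix of technical coefficients A, value-added ratios v and a vector of final food expenditure f, the total output needed to satisfy f is x = (I - A)^{-1} f, and each industry's share of the 'food dollar' is the value added it earns in producing x.

```python
# Toy Leontief decomposition of final food expenditure into industry
# value-added shares; numbers are illustrative, not FAOSTAT data.
import numpy as np

A = np.array([[0.10, 0.05, 0.02],    # agriculture
              [0.30, 0.15, 0.10],    # food processing
              [0.05, 0.10, 0.20]])   # trade, accommodation and food services
v = 1.0 - A.sum(axis=0)              # value added per unit of output (consistent with A)
f = np.array([10.0, 60.0, 30.0])     # final food expenditure by supplying industry

x = np.linalg.solve(np.eye(3) - A, f)       # x = (I - A)^{-1} f
contrib = v * x                             # value added generated in each industry
shares = contrib / contrib.sum()            # shares of the food dollar (sum to 1)
for name, s in zip(["agriculture", "food processing", "trade & food services"], shares):
    print(f"{name}: {s:.1%}")
```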
In ‘Identifying Spatially-Differentiated Pathways for Rural Transformation’ Joachim Vandercasteelen, Namash Nazar (both from the World Bank), Yahya Bajwa, and Willem Janssen propose customized pathways for rural transformation, tailored to specific contexts, based on the analysis of comparative advantages and market access. The study, focusing on Pakistan, proposes a framework based on agroecological potential and market access to classify rural areas into similar categories of comparative advantage, livelihood strategies, and the role of agriculture in transformation. Using geospatial data on vegetation greenness and market access, the paper empirically classifies rural districts in Pakistan and characterizes them using secondary data. It develops an urban gravity model to assess demand from agricultural markets and travel time to reach markets. Finally, the paper suggests tailored transformation pathways for each category, emphasizing market-oriented approaches for areas with better market access and community-driven development for areas with limited potential or access.
The eleventh paper, ‘Food Price Inflation Nowcasting and Monitoring’ is by Luís Silva e Silva, Christian A. Mongeau Ospina, and Carola Fabi (all from FAO). The sharp rise in food prices, due first to the outbreak of the global COVID-19 pandemic and subsequently to the Russia-Ukraine war, highlighted the critical need for timely information on market prices to address the significant threat of poverty and hunger, particularly for the most impoverished households in developing nations, who allocate a large portion of their income to food expenses. To address this challenge, FAO developed two complementary tools: a nowcasting model for monthly food inflation and a daily price monitor. These tools were designed to provide timely data insights crucial for policymaking during emergencies. Data for both tools are sourced from Numbeo, a crowdsourced database that collects daily prices for various food products. The Daily Food Price Monitoring tool identifies abnormal price fluctuations, while the nowcasting model estimates consumer price indices using Numbeo’s data on food prices and other relevant variables, like oil prices and exchange rates. Additionally, sentiment analysis based on a comprehensive database of tweets is incorporated to gauge public sentiment. The outputs from these tools are integrated into an early warning system, accessible through a user-friendly dashboard on the FAO website, allowing real-time monitoring of food price trends. The paper provides a detailed description of the methods used and of case studies from Latin American and Sub-Saharan African countries, showcasing the effectiveness of these tools in detecting early warning signals of price crises.
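Schematically, and only as a stylized sketch (not FAO's production model; file and column names are hypothetical), the nowcasting step amounts to regressing official monthly food inflation on crowdsourced price aggregates and auxiliary variables, and then scoring the most recent month for which only the predictors are observed:

```python
# Stylized nowcasting sketch; hypothetical file and column names.
import pandas as pd
from sklearn.linear_model import ElasticNetCV

df = pd.read_csv("monthly_panel.csv", parse_dates=["month"]).sort_values("month")
predictors = ["numbeo_food_price_index", "oil_price", "exchange_rate", "tweet_sentiment"]

train = df.dropna(subset=["food_cpi_inflation"])        # months with official CPI already released
latest = df[df["food_cpi_inflation"].isna()].tail(1)    # most recent month, CPI not yet available

model = ElasticNetCV(cv=5).fit(train[predictors], train["food_cpi_inflation"])
print("Nowcast of food CPI inflation (%):", round(model.predict(latest[predictors])[0], 2))
```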
In ‘Climate Change Scenario Analysis in Spree Catchment, Germany, using Statistically Downscaled ERA5-Land Climate Reanalysis Data’ Luenell Chris M. Buela (University of the Philippines) uses the Statistical DownScaling Model (SDSM) to address the existing limitations of global climate models. In particular, the SDSM is used to downscale ERA5-Land reanalysis data for the Spree catchment and obtain global climate model outputs at a finer resolution, with the aim of generating climate change scenarios. These scenario projections are valuable for assessing water management strategies in agriculture and hydrology under changing climate conditions. Linear scaling was applied to mitigate biases in global climate model data for precipitation and temperature. Climate change scenarios were then produced using SDSM, considering three emission scenarios. Results suggest increased precipitation under higher emission scenarios, with the summer and autumn seasons projected to receive up to 50 mm more rainfall by 2100, accompanied by a temperature rise of up to 1°C. The study underscores the importance of statistical downscaling and scenario generation in understanding climate change’s potential impacts on water resources, offering insights for adapting water resource management to changing climatic conditions.
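Linear scaling itself is a simple, generic bias-correction step; the following sketch shows the usual formulation (multiplicative monthly factors for precipitation, additive monthly offsets for temperature) and is not meant to reproduce the paper's exact configuration.

```python
# Generic linear-scaling bias correction for climate model output.
import pandas as pd

def linear_scaling(model_series: pd.Series, obs_hist: pd.Series,
                   model_hist: pd.Series, kind: str = "precip") -> pd.Series:
    """Correct a model series with monthly factors estimated over a historical
    period in which observations and model output overlap."""
    obs_clim = obs_hist.groupby(obs_hist.index.month).mean()      # observed monthly means
    mod_clim = model_hist.groupby(model_hist.index.month).mean()  # modelled monthly means
    months = pd.Series(model_series.index.month, index=model_series.index)
    if kind == "precip":                                          # multiplicative correction
        return model_series * months.map(obs_clim / mod_clim)
    return model_series + months.map(obs_clim - mod_clim)         # additive (temperature)
```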
2.3 Surveys in times of COVID-19 and in difficult settings
The first paper in this section, ‘Experiences with Mixed-Mode Surveys in Times of COVID-19 at Statistics Netherlands’ by Kees van Berkel, Jan van den Brakel, Daniëlle Groffen, and Joep Burger (all from Statistics Netherlands), examines the impact of the changes in survey data collection methods adopted by Statistics Netherlands in response to the disruption of face-to-face interviews caused by the COVID pandemic. It details the organization’s remedial measures to mitigate respondent attrition and response bias in the transition from face-to-face to web and telephone interviews. To address challenges inherent in mixed-mode designs, such as lower response rates and biases, the organization also explored alternative strategies like computer-assisted video interviewing (CAVI) that have the potential to enhance data quality and response rates. Looking ahead, the paper advocates for tailored questionnaires for mixed-mode surveys, which minimize differences in responses across data collection modes, and for innovative inference methods to correct mode-dependent biases effectively.
The second paper in this section, ‘A Simulation Study of Sampling in Difficult Settings: Statistical Superiority of a Little-Used Method’ by Harry Shannon, Patrick D. Emond, Benjamin M. Bolker, and Román Viveros-Aguilera (all from McMaster University, Hamilton, Ontario), is a simulation study aimed at addressing the challenges of sampling in difficult settings, particularly when little is known about the population being surveyed. Through simulations involving 50 virtual populations with different characteristics, such as disease prevalence and variability in population density across towns, the researchers evaluated ten sampling methods, considering the impact of clustering on survey precision and the trade-offs in terms of cost, time, and accuracy. The methods were assessed on the basis of their accuracy in estimating the prevalence of disease and the relative risk given an exposure. The findings revealed that the grid method generally outperformed the other methods across different circumstances, showing less susceptibility to clustering effects due to its sampling over a wider area. While the study provides valuable insights into sampling methodologies, it acknowledges limitations such as the simulation-based approach and the need for technical expertise in implementing certain sampling methods.
2.4 Time series analysis
This section comprises five papers that delve into various aspects of time series analysis and its applications in economic research and forecasting. The focus shifts to exploring innovative methodologies and data sources to enhance forecasting accuracy and gain fresh insights into economic dynamics.
The first paper, ‘Outlier Identification and Adjustment for Time Series’ by Markus Fröhlich (Statistics Austria), delves into outlier identification and adjustment for time series data, proposing a novel approach based on support vector regression (SVR) to detect additive outliers, particularly in short time series. Testing on European data yields promising results for both short and long time series. Despite challenges such as longer run-times and lower outlier identification rates compared with benchmark methods, the SVR approach shows potential for further optimization and application in enterprise-level data analysis.
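The underlying idea can be illustrated in a few lines of Python (an illustration of the general approach, not Statistics Austria's implementation): fit a support vector regression to the series, then flag observations whose residuals are large relative to a robust spread measure as candidate additive outliers.

```python
# Illustrative SVR-based additive-outlier detection on a simulated series.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
t = np.arange(60).reshape(-1, 1)
y = 10 + 0.2 * t.ravel() + rng.normal(0, 0.5, 60)
y[25] += 8                                    # inject an additive outlier

fit = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(t, y)
resid = y - fit.predict(t)
mad = np.median(np.abs(resid - np.median(resid)))
outliers = np.where(np.abs(resid) > 3.5 * 1.4826 * mad)[0]   # robust z-score cutoff
print("Flagged positions:", outliers)                        # expected to include position 25
```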
The second paper is ‘Predicting Macroeconomic Indicators from Online Activity Data: A Review’ by Eduardo Andre Costa (University of Porto) and Maria Eduarda Silva (INESC-TEC, LIAAD). The paper reviews the comparative advantage of using online activity data, such as Google Trends, Twitter (now rebranded as X), and mobile device data, to predict key macroeconomic indicators, highlighting the benefits in terms of timeliness and cost-effectiveness compared to traditional survey data. In particular, the paper carries out a systematic review of the existing literature on predicting macroeconomic indicators, examining the predictive accuracy of information from such data sources while also discussing their limitations and challenges. The review concludes that online activity data can be a valuable source of information for predicting macroeconomic indicators, even if further research is needed on the use of social media and mobile-generated data, particularly for developing and less developed economies.
The third paper, ‘Optimising Port Arrival Statistics: Enhancing Timeliness through Automatic Identification System (AIS) Data’ by Nele van der Wielen, Justin McGurk, and Labhaoise Barrett (all from CSO, Ireland), explores the use of Automatic Identification System (AIS) data from ships to enhance official port visit statistics. It introduces the ‘Stationary Marine Broadcast Method’ (SMBM) for estimating port visits and discusses the challenges and opportunities of its integration with official data. Significant benefits of the AIS data source are the adoption of a global standard, its timeliness, and its granularity. However, AIS-based estimates are structurally higher than traditional port statistics, possibly indicating that the current official statistics underestimate the number of port calls or that AIS-based estimates include an element of double counting. Although, for either reason, AIS will not immediately replace traditional statistics, it holds promise for improved statistics in the future if integrated carefully, with continued development of experimental statistics alongside official data to explore AIS’s potential benefits globally.
The paper ‘The Interdependence and Cointegration of Stock Markets: Evidence from Japan, India, and the USA’ by John Pradeep Kumar (REVA University, Karnataka) and N. Mukund Sharma (B.N.M. Institute of Technology, Bengaluru) investigates the interdependence and cointegration of stock markets in Japan, India, and the USA, exploring their dynamics and long-term relationships. Using monthly data from April 2012 to March 2022, the study analyses the NIKKEI (Japan), BSE SENSEX (India), and NASDAQ (USA) indices. The Granger causality test shows that while the NASDAQ index predicted the SENSEX index with high precision, the NIKKEI index did not. Despite evidence of short-term relations, no long-term coupling among the indices is found, suggesting that international investors can diversify their portfolios across these markets. Limitations of the analysis include the choice of time period and econometric method, paving the way for future research to explore broader financial instruments and global markets from a comprehensive perspective.
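By way of illustration, the two workhorse tests in such an analysis can be run with standard statsmodels calls (a generic sketch, not the authors' code; the `prices` DataFrame and its column names are assumptions):

```python
# Generic sketch of Granger-causality and cointegration testing for two indices.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests, coint

def granger_and_coint(prices: pd.DataFrame, cause: str, effect: str, maxlag: int = 4):
    """Test whether `cause` Granger-causes `effect` (on log returns) and whether
    the two log price levels are cointegrated."""
    returns = np.log(prices[[effect, cause]]).diff().dropna()
    granger = grangercausalitytests(returns, maxlag=maxlag, verbose=False)
    _, coint_pvalue, _ = coint(np.log(prices[effect]), np.log(prices[cause]))
    return granger, coint_pvalue

# Example call (assuming `prices` holds monthly closing levels):
# results, p = granger_and_coint(prices, cause="nasdaq", effect="sensex")
```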
Lastly, the section includes ‘An Investigation into the Convergence of Economic Growth Among Indian States and Path Ahead’ by Jitendra Kumar Sinha. The study investigates convergence in economic growth across Indian states over the period 1991–2022, using both the augmented Solow model and the extended Solow growth regression model. The results from the augmented Solow model show evidence of conditional convergence across states after controlling for investment, human capital, and population growth. Findings from the extended Solow growth regression model show even stronger evidence of conditional convergence throughout the study period when controlling for government final consumption and life expectancy, indicating the robustness of the results. Consequently, economically disadvantaged states in India have experienced more rapid economic growth over the period under study than affluent ones. Physical and human capital, along with population growth, are found to positively influence economic growth. Policy recommendations follow from the results obtained.
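For context, both specifications build on the standard conditional β-convergence regression, which in textbook form (not necessarily the author's exact equation) reads:

```latex
\frac{1}{T}\,\ln\!\left(\frac{y_{i,T}}{y_{i,0}}\right)
  \;=\; \alpha \;+\; \beta \,\ln y_{i,0} \;+\; \gamma' X_i \;+\; \varepsilon_i ,
```

where y_{i,0} and y_{i,T} are initial and final per capita income of state i, X_i collects the conditioning variables (investment, human capital and population growth in the augmented model; additionally government final consumption and life expectancy in the extended model), and a significantly negative β signals conditional convergence.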
3. SJIAOS discussion platform: “Good Data are Used Data”
The 20th discussion, titled “Good Data are Used Data”, reflects on the importance of data utilization alongside production quality, as emphasized by Stefan Schweinfest in the interview that opens this issue of the Journal. It explores the concept of ‘good data’ as data that are not only methodologically sound but also actively utilized, including by civil society and the private sector. To operationalize this concept, the discussion poses questions on measuring data usage, such as identifying potential user groups and assessing the magnitude or impact of data use. The discussion will be launched in mid-June 2024 on the SJIAOS discussion platform at https://officialstatistics.com/discussion-platform.
Pietro Gennari
Editor-in-Chief
May 2024
Statistical Journal of the IAOS
E-mail: gennari.sjiaos@gmail.com
Notes
1 United Nations, Statistical Commission, Report on the fifty-fifth session (27 February–1 March 2024), Economic and Social Council, Official Records, 2024, Supplement No. 4 (E/2024/24-E/CN.3/2024/36), available at: https://unstats.un.org/UNSDWebsite/statcom/session_55/documents/2024-36-FinalReport-E.pdf.
2 See Andreas V. Georgiou (2017), ‘Towards a global system of monitoring the implementation of UN fundamental principles in national official statistics’, Statistical Journal of the IAOS, vol. 33, no. 2.