7 Master Data Management Use Cases in 2024


Figure 1. Worldwide interest in master data management since 2004. 1

Master data management (MDM) is the process of collecting, storing, organizing, and maintaining a company’s critical data. This data can include information about customers, products, suppliers, finances, and compliance. Master data management is closely tied to data governance. However, interest in master data management has remained flat, while interest in data governance has been rising steadily since 2016.

Master data management capabilities are essential for businesses that want to make informed decisions based on accurate and complete data. In this article, we explore the seven use cases for master data management and their benefits.

1. Data governance

Data governance is the process of managing data accuracy, availability, usability, integrity, and security. It is an essential component of effective master data management because it ensures that data is of high quality and properly secured.

Since 2004, interest in data governance has been increasing, with high search volume in almost every U.S. state. Data governance improves data quality, which is critical to managing master data effectively. With accurate, complete, and consistent master data records, organizations can create a single, trusted source of data that can be used across the organization to support various business processes, such as:

  • Sales: Accurate customer data can assist sales teams in identifying potential customers and tailoring their approach. MDM can provide a single, trusted master data record of customers that can be used to inform sales decisions.
  • Marketing: Product data that is complete and accurate can assist marketing teams in developing targeted marketing campaigns that resonate with their target audience and drive engagement and sales. MDM can provide a centralized repository for product data to support marketing initiatives.
  • Finance: Consistent financial data can assist finance teams in tracking revenue and expenses, developing budgets, and making sound financial decisions. MDM can provide a centralized, trusted source of financial data for use in making financial decisions.
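The "single, trusted record" idea behind these use cases can be sketched in a few lines. This is a minimal, hypothetical illustration: the field names and the "prefer the newest non-empty value" survivorship rule are assumptions for the example, not any particular MDM product's behavior.

```python
# Hypothetical golden-record sketch: field names and merge rules are
# illustrative assumptions, not a specific MDM product's API.

def merge_records(records):
    """Build one golden record, preferring the newest non-empty value
    for each field (records carry an ISO-format 'updated' date)."""
    ordered = sorted(records, key=lambda r: r["updated"], reverse=True)
    golden = {}
    for rec in ordered:
        for field, value in rec.items():
            if field != "updated" and value and field not in golden:
                golden[field] = value
    return golden

# The same customer as seen by two systems:
crm = {"name": "ACME Corp", "email": "", "phone": "555-0100",
       "updated": "2024-01-10"}
billing = {"name": "Acme Corporation", "email": "ap@acme.example",
           "phone": "", "updated": "2023-11-02"}

golden = merge_records([crm, billing])
print(golden)  # name and phone from the newer CRM record, email filled from billing
```

Real MDM platforms apply far richer matching and survivorship rules, but the principle is the same: many conflicting records in, one trusted record out.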

Data security

Another benefit of master data management is improved data security. By creating accurate, well-governed master data, MDM can help reduce the risk of data breaches and protect the organization’s reputation.

2. Customer data management

Global interest in master data management and customer data management peaked in 2020. Customer data management (CDM) is a subset of MDM that focuses on collecting, organizing, and maintaining customer-related data within a company.

Effective customer data management enables businesses to keep accurate and complete customer information on hand, which can then be used to improve customer service and gain insights into customer behavior . 

As a result, the benefits of CDM include:

  • Improved customer service through personalized interactions based on accurate and complete customer data.
  • Gaining insights into customer behavior by analyzing their data, which can be used to develop new products and services and improve customer loyalty.

3. Financial data management

Financial data management is a critical component of master data management that focuses on collecting, organizing, and maintaining financial data within an organization. Financial data can include transactional data, revenue, expenses, assets, liabilities, and other financial metrics that are essential for decision-making and financial reporting.

A financial data management strategy is critical for ensuring that an organization’s financial data is accurate, complete, and consistent across all systems and applications. This matters for several reasons. First, precise financial data is required for financial reporting, regulatory compliance, and tax reporting. Second, financial data is an important input for business decision-making processes such as budgeting, forecasting, and strategic planning.

One of the primary advantages of incorporating financial data management into an MDM strategy is that it allows organizations to gain a comprehensive view of their financial operations. Organizations can gain valuable insights into their financial performance and identify areas for improvement by integrating financial data with other master data domains. Organizations, for example, can use financial data to determine which products or services are the most profitable, which regions generate the most revenue, and which cost centers drive expenses.
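The kind of analysis described above becomes a simple aggregation once transactions carry consistent master data codes. A small sketch with invented data and field names:

```python
# Illustrative profitability roll-up: once transactions share consistent
# master product and region codes, these questions are simple aggregations.
# All data and field names here are invented for the example.
from collections import defaultdict

transactions = [
    {"product": "P-100", "region": "EMEA", "revenue": 1200.0, "cost": 700.0},
    {"product": "P-100", "region": "AMER", "revenue": 900.0, "cost": 500.0},
    {"product": "P-200", "region": "EMEA", "revenue": 400.0, "cost": 350.0},
]

def profit_by(rows, key):
    """Total profit (revenue minus cost) grouped by a master data code."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row["revenue"] - row["cost"]
    return dict(totals)

print(profit_by(transactions, "product"))  # which products are most profitable
print(profit_by(transactions, "region"))   # which regions drive the most margin
```

Without consistent product and region codes across systems, the same roll-up would first require manual reconciliation of mismatched identifiers.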

4. Supplier data management

One of the lesser-known aspects of MDM is supplier data management: the process of collecting, organizing, and maintaining supplier data. In the last five years, supplier data management has seen consistent interest from India, the United States, and Canada. This information may include details about:

  • Supplier contracts
  • Performance
  • Compliance 

Reduced supplier risks

Effective supplier data management is critical for companies that want to ensure that their suppliers meet quality and compliance standards. Knowing what is happening in your supply chain can prevent snowball effects.

One of the benefits of supplier data management is the ability to reduce supplier risks. By maintaining accurate and complete supplier data, businesses can identify potential supplier risks and take proactive measures to mitigate them. This can help businesses reduce the risk of supply chain disruptions and improve the overall quality of their products and services.
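A hedged sketch of what such proactive risk checks might look like; the fields and thresholds are illustrative assumptions, not an industry standard:

```python
# Hedged sketch of proactive supplier risk checks; the fields and the
# thresholds below are illustrative assumptions.

def flag_supplier_risks(supplier):
    """Return human-readable risk flags for one supplier record."""
    flags = []
    if supplier.get("cert_expired"):
        flags.append("compliance certificate expired")
    if supplier.get("on_time_rate", 1.0) < 0.90:
        flags.append("on-time delivery below 90%")
    if not supplier.get("contract_on_file"):
        flags.append("no contract on file")
    return flags

supplier = {"name": "Widget Co", "cert_expired": True,
            "on_time_rate": 0.82, "contract_on_file": True}
print(flag_supplier_risks(supplier))
```

Checks like these are only as good as the supplier master data behind them, which is why accuracy and completeness come first.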

Supplier performance

Another benefit of supplier data management is the ability to improve supplier performance. By analyzing supplier data, businesses can identify areas for improvement and work with their suppliers to address them. This can help businesses build stronger relationships with their suppliers and improve the overall quality of their products and services.

5. Compliance data management

Since 2020, interest in compliance management has been increasing. Compliance data management is the process of collecting, organizing, and maintaining compliance data, which may include regulatory requirements, internal policies and procedures, and audit records.

Reducing compliance risk is one of the advantages of compliance data management. By maintaining accurate and complete compliance data, businesses can recognize possible compliance issues early and take preventative action to mitigate them, reducing the chance of non-compliance fines and reputational harm.

A further advantage of compliance data management is enhanced compliance reporting. By preserving accurate and complete compliance data, businesses can produce more reliable compliance reports to inform decisions, ensure they are fulfilling their obligations, and improve their compliance performance.

6. Product information management

Product data management (PDM) is an important component of master data management that focuses on collecting, organizing, and maintaining product-related data within an organization. Product data can include product descriptions, specifications, prices, inventory levels, and other information that is essential for managing an organization’s product catalog and sales operations.

Good product data management ensures that an organization’s product information is accurate, complete, and consistent across all systems and applications. MDM treats product data as a master data domain alongside customer, financial, and supplier data, so reliable product data is vital for precise and consistent operations management.

One benefit of product data management is improved product quality. By preserving precise and complete product data, organizations can identify and correct product problems and improve product features.

Another benefit of product data management is streamlined product development. By retaining accurate and complete product data, businesses can reduce product development time and expense and launch new products faster and more efficiently.

7. Reference data management

Reference data management (RDM) is a critical component of master data management that focuses on managing reference data within an organization. Reference data is data used to categorize, classify, or otherwise define other data elements within an organization’s systems and applications. Product codes, industry codes, geographic codes, and other standard or regulatory codes are examples of reference data. In the last five years, reference data management has been frequently searched on Google.

Reference data management benefits

Reference data management benefits include:

  • Improved data quality: One of the benefits of reference data management is the ability to improve data quality. By maintaining accurate and complete reference data, businesses can ensure that data is properly classified and can be used effectively. This can help businesses make informed decisions based on high-quality data.
  • Improved data integration: Another benefit of reference data management is the ability to improve data integration. By maintaining accurate and complete reference data, businesses can ensure that data can be integrated across different systems and platforms. This can help businesses streamline their processes and reduce the risk of data errors.
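Both benefits rest on the same mechanism: every system validates against one shared code table. A minimal sketch (the country codes are real ISO 3166-1 alpha-2 values; the record layout is invented for the example):

```python
# Minimal sketch of reference data in action: one shared code table lets
# records from different systems be validated consistently. The codes are
# real ISO 3166-1 alpha-2 values; the record layout is invented.

COUNTRY_CODES = {"US": "United States", "DE": "Germany", "JP": "Japan"}

def validate(records):
    """Split records into those using the shared codes and those that don't."""
    good, bad = [], []
    for rec in records:
        (good if rec["country"] in COUNTRY_CODES else bad).append(rec)
    return good, bad

records = [{"id": 1, "country": "US"},
           {"id": 2, "country": "Germany"}]  # a name, not the shared code
good, bad = validate(records)
print([r["id"] for r in good], [r["id"] for r in bad])
```

Record 2 is caught because it stores a country name instead of the agreed code — exactly the kind of mismatch that breaks integration between systems.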


This article was drafted by former AIMultiple industry analyst Yılmaz Doğukan Özlü.

External Links

  • 1. Google Trends




Implement Reference Data Management Into Your Business Strategy

Save time and effort by standardizing reference lists and contextual data across your business with reference master data management (MDM).


POOR REFERENCE DATA CAN AFFECT YOUR BUSINESS

Reference data impacts each and every part of your organization, providing context around your other enterprise data.

By tracking it manually in multiple systems, you run the risk of:

Time lost to manual data prep

Unnecessary operational costs

Inaccurate reporting and analytics

LEVEL-UP YOUR REFERENCE DATA WITH MASTER DATA MANAGEMENT

Managing reference data with MDM allows you to drive consistency across enterprise systems to improve data quality while eliminating manual clean-up for reporting and analytics.

Reference data management tools standardize common data attributes and records, so you can:

ELIMINATE operational costs of storing data in multiple systems

ACCURATELY inform and provide context around BI and analytics reporting

The Ultimate Guide to Reference Data Management in Healthcare

by Gaine Solutions | Aug 31, 2022 | Healthcare , Master Data Management

Reference data management in healthcare requires manageable user interfaces and reduces the burden on IT.

Data-informed decision-making increases operational efficiency and reduces organizational risk. In an enterprise system, reference data management in healthcare enables organization-wide consistency.

Industries that rely on vast quantities of information – such as healthcare systems – need a comprehensive method to store, categorize, and interpret this data across multiple applications. For example, a diagnosis like diabetes typically has a central code (i.e., reference input) that other system applications use to make sense of more complex information. 

Before constructing or personalizing your healthcare system’s reference data management protocol, evaluate a proposed software’s ability to facilitate data governance, system integration, and technological acquisition.

Key Takeaways:

  • Data-based decision-making reduces inconsistency and helps healthcare systems control costs, maximize profits, and improve value-based care. The enterprise suffers when the organization siphons data into disparate business units.
  • When creating a reference data management system in healthcare, executives should consider the importance of (1) data governance, (2) innovation planning, (3) content consistency, (4) management of data creep, and (5) integration of new tools.
  • Both payors and providers benefit from effective reference data management. The single source of truth established in such systems encourages operational efficiency, cost reduction, and optimized analytics. 

Reference Data Management in Healthcare

To understand reference data management , let’s first define “reference data.”

What Is Reference Data?

Reference data is any information you use to make sense of other information by categorizing inputs in a database or connecting that data to further enterprise inputs.

Reference data itself may be internally or externally defined.

Image Source: RDM | The Disruptive MDM / PIM / DQM List (mdmlist.com)

Examples of reference data include: 

  • Country codes
  • Exchange codes 
  • Postal codes 
  • Measurement units 
  • Products or pricing

Most reference data points remain consistent over time. For example, postal codes in the United States only change as population demographics shift over the long term. 

Other reference data points, such as country and currency codes, change more frequently. If your healthcare system relies on standardized codes that change often, an effective reference data management system must account for this “data creep.”
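One common way to account for data creep is to give each reference code an effective-date range, so historical records still resolve to the meaning that was valid at the time. A hypothetical sketch with an invented code history:

```python
# Hypothetical sketch of handling "data creep": each reference code carries
# an effective-date range, so lookups resolve to the meaning that was valid
# on a given date. The code history below is invented for illustration.
from datetime import date

CODE_HISTORY = [
    {"code": "XYZ", "meaning": "Old Currency",
     "valid_from": date(2000, 1, 1), "valid_to": date(2014, 12, 31)},
    {"code": "XYZ", "meaning": "New Currency",
     "valid_from": date(2015, 1, 1), "valid_to": None},  # still current
]

def lookup(code, on_date):
    """Return the meaning a code had on a given date, or None."""
    for row in CODE_HISTORY:
        if (row["code"] == code and row["valid_from"] <= on_date
                and (row["valid_to"] is None or on_date <= row["valid_to"])):
            return row["meaning"]
    return None

print(lookup("XYZ", date(2010, 6, 1)))  # Old Currency
print(lookup("XYZ", date(2020, 6, 1)))  # New Currency
```

Without effective dating, the 2010 record would silently resolve to the current meaning of the code, corrupting historical analysis.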

Reference data differs from master data: the former classifies and defines other data, while the latter describes an enterprise’s core business entities. For instance, enterprise systems use reference data to create control logic and to categorize master data into understandable chunks for analysis.

The Importance of Reference Data

Reference data plays an integral role in enterprise systems like healthcare providers. It impacts data quality, readability, and risk reduction. 

Uses of reference data include: 

  • Establishes data consistency: As the “base layer” of information about your company, reference data enables a uniform enterprise-wide understanding for designing common metrics.
  • Enables data governance: Creating controls for data access facilitates accountability by key stakeholders and end-users. This promotes trust in the underlying data.
  • Provides a source of truth: Use of a consistent reference data system contributes to a trusted single source of data enterprise-wide.
  • Improves data-informed decision-making: Creating a uniform source of truth relied upon by end-users throughout the organization enables faster implementation of data insights.

What Is Reference Data Management?

Managing reference data requires business leaders or IT systems administrators to manage classifications across enterprise systems and business units. This may include performing analytics on reference data, tracking changes to reference data over time, and distributing reference data across categories. 

Reference data management in healthcare facilitates value-based care and cost reduction. When evaluating a reference data management system, look for the following characteristics:

1. Capable of Mapping Reference Data

A reference data management system must be adaptable. The system must enable end-users to apply the reference codes to their application-specific or use-case-specific purposes. 

Similarly, the system must allow maximum accessibility to these users, including remote subscribers, consumers, and healthcare providers. A reference data management system that is unwieldy or lacks a user-friendly interface is ineffective.

2. Provides for Management and User Experience

Hospital systems and other healthcare providers need enterprise-wide consistency . A reference data management platform should provide flexibility for installing, configuring, and importing data with minimal IT costs. 

At the same time, the data and system architecture must be secure. A reference data management system can provide role-based security, limiting end-users’ access to only the reference data they need.

3. Designed with a Robust Architecture and Performance Ability

Management systems may come with hierarchical management or interconnected sets of reference data. Hierarchical data structures allow for more complexity. 

With this complexity, a semantic architecture allows the system to manage relationships within reference data sets across time.

4. Reduced Burden on IT Systems

Formal governance of reference data prevents data decay and miscommunication when mapping relationships. Workflow processes that automate end-to-end lifecycle management reduce the burden on existing IT infrastructure while also improving data quality.

5. Supports Interoperability Across Various Systems

While reference data generally remains static, the master data or other information it decodes changes more frequently. Your selected reference data management program should enable the import and export of reference data through multiple formats (i.e., flat files, databases, CSV/XML files, and others). 
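The flat-file import path mentioned above can be as simple as parsing a CSV into a lookup table. A self-contained sketch (the file contents are inlined via StringIO, and the codes themselves are invented):

```python
# Sketch of the flat-file import path: reference data arriving as CSV is
# parsed into a lookup table. The file contents are inlined so the example
# is self-contained; the codes are invented for illustration.
import csv
import io

csv_text = "code,description\nA01,Inpatient\nB02,Outpatient\n"

def load_reference_table(stream):
    """Read a code,description CSV stream into a dict."""
    return {row["code"]: row["description"] for row in csv.DictReader(stream)}

table = load_reference_table(io.StringIO(csv_text))
print(table["A01"])  # Inpatient
```

A production import would add validation, deduplication, and change tracking on top of this, but the core exchange format really is this simple.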

6. Enables Automated Processes

A well-defined automated process enables reference data to interoperate with other key data. As this data increases in complexity, automation maintains data consistency and keeps the user experience simple.

Work with Gaine Solutions for Reference Data Management in Healthcare

Effective data management transforms the healthcare experience for both provider and patient. Gaine Solutions was built with the healthcare industry in mind and has designed a reference data management protocol for exactly that purpose.

Contact us today to learn how Gaine can assist you in your data quality needs.


What Is Reference Data Management?


Reference data management is an essential component of data management. It allows organisations to maintain data accuracy and consistency across data sets. It helps organisations comply with regulations and reduce compliance risks.

This article discusses the purpose of reference data management. It also explores its key features, collaborative efficiencies, and the benefits it can provide.

Key Takeaways

  • Legacy solutions lack important features such as change management, audit controls, and granular security/permissions.
  • Errors in reference data can hurt business operations.
  • Mismatches in reference data can affect data quality, BI reports, and application integration.
  • Reference Data Management (RDM) solutions such as Tempo provide tailored configurations and ongoing support, helping organisations comply with regulations and mitigate compliance risks.

What Is Reference Data Management?

Reference data management is the process of collecting, organising, and managing reference data used across multiple systems and applications. It provides a centralised control, ensuring consistency, compliance, scalability, and quick reaction to new data requirements.

It also helps organisations meet regulatory requirements, maintain data integrity, and improve operational efficiency. RDM solutions must have an extensible data model, a user-friendly interface, and a robust architecture to be effective.

Lastly, governance processes, metadata management, and integration with other solutions are necessary for end-to-end lifecycle management.

Why Is Reference Data Management Important?

A well-managed system for organising and maintaining information is paramount for businesses of all sizes. Reference Data Management (RDM) provides many benefits, such as:

  • Capabilities legacy solutions lack: change management, audit controls, granular security/permissions
  • Compliance: regulations, compliance risks, consistency
  • Connectivity: access, distribution, updates
  • Lifecycle: governance, metadata, data protection
  • External data: standard practices, accuracy, reliability

These advantages enable organisations to scale operations, improve data quality, and ensure compliance.

Reference Entity Profile

Entity profiles are an important component of RDM for providing accurate and up-to-date organisational information. They provide a comprehensive overview of an entity's characteristics, such as its classification, key contacts, and ownership structure.

Reference data management systems can store such information and support the process of maintaining profiles over time. Entity profiles allow organisations to keep track of changes in an entity's structure, and to ensure that reference data is uniform and consistent.

They are also used to generate reports and dashboards for better decision-making and to identify potential risks.

Key Features

Key features of reference data management systems include:

  • The ability to store organisational information
  • The ability to maintain profiles over time
  • The ability to generate reports and dashboards
  • The ability to identify potential risks

Reference data management systems offer:

Connectivity & Integration:

  • Multiple flexible means of connection
  • Import & export of data
  • Role-based security

Lifecycle Management:

  • Governance UI & workflow
  • Metadata management
  • End-to-end data management

Evaluation Criteria:

  • Types of data
  • Extensible data model
  • User-friendly interface
  • Robust architecture & performance

Attribute Analysis

Attribute analysis is a process used to assess the characteristics of reference data to determine its accuracy, completeness, and relevance. This includes identifying and analysing data attributes, assessing data quality, and verifying information accuracy.

Data attributes can include values, data types, formats, and structures. Quality assurance is used to check data accuracy, integrity, and consistency. Data verification ensures data meets specified criteria and is up-to-date.
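The checks above can be expressed as per-attribute rules. A hedged sketch — the two-letters-two-digits code format and the rules themselves are illustrative assumptions, not a standard:

```python
# Hedged sketch of attribute analysis: each reference attribute gets a
# type/format rule. The code format below is an illustrative assumption.
import re

RULES = {
    "code": lambda v: bool(re.fullmatch(r"[A-Z]{2}\d{2}", str(v))),
    "description": lambda v: isinstance(v, str) and v.strip() != "",
}

def analyse(record):
    """Return the names of attributes that fail their rule."""
    return [name for name, ok in RULES.items() if not ok(record.get(name))]

print(analyse({"code": "AB12", "description": "Valid entry"}))  # []
print(analyse({"code": "ab1", "description": ""}))  # ['code', 'description']
```

Running rules like these across a whole reference table gives the accuracy and completeness picture that attribute analysis is meant to provide.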

Importing Reference Data

Importing reference data is critical for organisations to ensure data accuracy and consistency. It involves bringing in data from external sources and integrating it with internal data.

Organisations must have flexible connectivity options, user-friendly interfaces, and robust architecture to ensure successful importation. Proper governance and collaboration are also needed to ensure accuracy and consistency of reference data.

Organisations must have policies in place to ensure compliance with data governance standards.

Assigning Accountabilities

Accountabilities must be assigned to manage reference data to ensure compliance with data governance standards.

Reference data governance should involve SMEs for creating and maintaining data standards, regular reviews and updates, and departments held accountable for their internal reference data.

Collaboration is key, with all applications and users having access to current and synchronised data.

Benefits include faster turnaround time, reduced operational risk, improved regulatory compliance, and increased business resource optimisation.

Approach to Effective Management

Managing external data is another important aspect of effective management. It involves standard practices and improves data quality.

Governing reference data is also crucial. Subject matter experts should do this and regularly review to ensure accuracy and consistency.

Promoting collaboration is essential for effective management. It facilitates data sharing and enhances decision-making processes.

Realising the benefits of reference data management is the ultimate goal. These benefits include faster turnaround and reduced operational risk.

Governing Reference Data

Subject matter experts should govern reference data by creating and maintaining data standards. Regular reviews and updates are essential for accuracy and consistency.

Organisations should hold departments accountable for their internal reference data and ensure compliance with data governance policies. Collaboration and data sharing should be promoted to enhance operational efficiency and effectiveness.

Governance of reference data provides benefits such as improved regulatory compliance, enhanced operational metrics and performance monitoring, and faster turnaround times.

Implementation of governing policies and centralised data management can provide numerous benefits. These include faster turnaround time in obtaining error-free data, reduced operational risk, improved regulatory compliance, enhanced operational metrics, and increased business resource optimisation.

These benefits can be realised through improved data quality and reliability, seamless data integration, and better department collaboration. Additionally, reference data management facilitates policy enforcement and decision-making processes, allowing for more informed and timely decisions.

Improved Data Governance

Implementing improved data governance policies can help ensure the accuracy and consistency of reference data across applications and users.

Reference data should be governed by subject matter experts (SMEs) who are responsible for creating and maintaining data standards, regularly reviewing and updating the data, and ensuring compliance with data governance policies.

Collaboration is also essential, as all applications and users should have current and synchronised data.

Benefits of improved data governance include reduced operational risk through centralised data management, improved regulatory compliance and policy enforcement, and enhanced operational metrics and performance monitoring.

Quicker Turnarounds

Organisations can experience quicker turnarounds in obtaining error-free data by utilising reference data management solutions. Such solutions enable access, distribution, and updates of reference data across systems, facilitating quick reaction to new data requirements or market changes.

Centralised control ensures consistency and compliance, while user-friendly interfaces and flexible data models enable easy installation and configuration. Hierarchy management of reference code tables and values lets operations and analytics processes scale.
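As a minimal illustration of hierarchy management over a reference code table, the sketch below lets each code roll up to a parent code so the same table can serve reporting at several levels. The codes and structure are hypothetical examples, not any particular standard:

```python
# Minimal sketch of a hierarchical reference code table: each code may
# roll up to a parent code, enabling reporting at different levels.
REGION_CODES = {
    # code: (description, parent_code)
    "EMEA": ("Europe, Middle East and Africa", None),
    "EU":   ("European Union", "EMEA"),
    "DE":   ("Germany", "EU"),
    "FR":   ("France", "EU"),
}

def rollup_path(code, table):
    """Return the chain of codes from `code` up to its top-level parent."""
    path = []
    while code is not None:
        path.append(code)
        code = table[code][1]
    return path

print(rollup_path("DE", REGION_CODES))  # ['DE', 'EU', 'EMEA']
```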

Reduced operational risk and improved regulatory compliance further enhance the benefits of reference data management.

Minimised Operational Risks

Utilising reference data management solutions can minimise operational risks. These solutions provide tailored configurations and ongoing support to ensure consistency and compliance. They enable quick access, distribution, and updates of reference data across systems while also allowing operations and analytics processes to scale.

Additionally, reference data management solutions provide robust architecture and performance, user-friendly interfaces, and support for import/export of data in multiple formats.

Enhanced Regulatory Compliance

Implementing reference data management solutions can enhance regulatory compliance. By centralising control and enforcing granular security permissions, organisations can reduce compliance risks and maintain compliance with regulations.

Data governance policies can be enforced, and regular reviews and updates of reference data can ensure accuracy and consistency.

These systems can also facilitate seamless integration and data sharing, allowing organisations to reduce operational risks and improve decision-making processes.

Collaborative Efficiencies

Centralising reference data management can facilitate collaboration and synchronisation of data across applications and users, thereby enhancing cross-functional efficiency. It enables users to access and share information quickly and accurately while providing a framework for governing and maintaining data quality.

Reference data can be managed in a secure, centralised environment, ensuring data consistency and regulatory compliance. Benefits include improved operational metrics, enhanced decision-making processes, and reduced operational risk.

Reference Data Management Evaluation

Reference data management evaluation involves assessing the ability of a system to manage various types of data. This includes evaluating its extensibility, user-friendliness, performance, security, and connectivity.

Additionally, the end-to-end lifecycle management capabilities of the system should be evaluated for compliance, scalability, and collaboration. This assessment ensures that organisations are able to manage their data effectively and efficiently.
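One simple way to make such an evaluation concrete is a weighted scorecard. The criteria below follow the dimensions named above; the equal weights and sample scores are purely illustrative, not a recommendation:

```python
# Illustrative weighted scorecard for comparing RDM solutions.
# Criteria mirror the evaluation dimensions above; weights/scores are made up.
CRITERIA_WEIGHTS = {
    "extensibility": 0.2,
    "user_friendliness": 0.2,
    "performance": 0.2,
    "security": 0.2,
    "connectivity": 0.2,
}

def weighted_score(scores, weights=CRITERIA_WEIGHTS):
    """Combine per-criterion scores (0-5) into a single weighted total."""
    return sum(weights[c] * s for c, s in scores.items())

vendor_a = {"extensibility": 4, "user_friendliness": 3, "performance": 5,
            "security": 4, "connectivity": 2}
print(round(weighted_score(vendor_a), 2))  # 3.6
```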

Tempo Reference Data Management Solutions

In the rapidly evolving world of data-driven businesses, keeping reference data up-to-date and governed effectively is pivotal. Tempo, a pioneering creation by Clear Strategy, is at the forefront of this transformation.

A Game Changer in Real-time Updates: 

In the age of immediacy, delays and outdated information can pose serious setbacks. Tempo ensures that reference data remains fresh and up-to-the-minute. By supporting real-time updates, Tempo stands out as an indispensable tool for enterprises striving to maintain pace in the digital age.

Effective Governance Ensures Integrity:

Beyond speed, the accuracy and reliability of data is paramount. Tempo comes equipped with robust governance protocols, ensuring that your reference data remains consistent, valid, and trustworthy. This reduces errors and builds a foundation of reliability within your data platform.

Purpose-built for Comprehensive Management: 

Its meticulous design, tailored exclusively for managing Enterprise Reference Data, sets Tempo apart. Whether it's for streamlining operations, regulatory compliance, or enhancing analytics, reference data is a lynchpin. With its intuitive features, Tempo seamlessly integrates into your data platform, becoming a cornerstone for efficient data management.

Categorisation and Structuring Made Easy: 

Diverse business operations often require a vast array of categorised data groups. Tempo shines in its ability to categorise and structure various data types into essential groups effortlessly. Be it Customer Segments, Product Codes, Financial Hierarchies, or Country Codes, Tempo ensures organised and easily accessible reference data at all times.
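Tempo's internals are not public, but the general idea of categorising reference data into named groups can be sketched generically. All group names and code values below are illustrative:

```python
# Conceptual sketch: reference data organised into named groups, each group
# holding a code -> description table. All values are illustrative.
reference_groups = {
    "country_codes":     {"IE": "Ireland", "GB": "United Kingdom"},
    "customer_segments": {"RET": "Retail", "CORP": "Corporate"},
    "product_codes":     {"SAV": "Savings account", "LON": "Loan"},
}

def describe(group, code):
    """Look up a code's description, raising if the group/code is unknown."""
    try:
        return reference_groups[group][code]
    except KeyError as exc:
        raise KeyError(f"Unknown reference value: {group}/{code}") from exc

print(describe("customer_segments", "CORP"))  # Corporate
```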

In a nutshell, Tempo by Clear Strategy isn't just another addition to the world of Reference Data Management – it's a paradigm shift. Embrace the future of data management with Tempo, where real-time updates meet impeccable governance.

Benefits of Reference Data Management

The implementation of reference data management can provide numerous advantages to an organisation. These include:

  • Faster turnaround time and improved accuracy in obtaining error-free data
  • Reduced operational risk
  • Enhanced regulatory compliance and policy enforcement
  • Improved operational metrics and performance monitoring
  • Increased business resource optimisation and efficiency

Reference data management also facilitates:

  • Seamless integration and data sharing
  • Collaboration and informed decision-making across the organisation

Frequently Asked Questions

What Is the Best Way to Ensure Accuracy and Consistency of Reference Data?

The best way to ensure the accuracy and consistency of reference data is to use standard practices to discover, profile, and understand it, keep it up to date, and regularly review and update it. Additionally, assigning subject matter experts (SMEs) to govern and hold departments accountable is essential.

How Can Reference Data Management Help Organizations Comply With Regulations?

Reference Data Management (RDM) can help organisations comply with regulations by providing tailored configurations and ongoing support, allowing operations and analytics processes to scale, facilitating quick reaction to new data requirements, and ensuring semantic consistency.

What Are the Key Criteria to Consider When Evaluating Reference Data Management Solutions?

The key criteria to consider when evaluating reference data management solutions include the ability to manage various types of reference data, extensibility, user-friendliness, connectivity and integration, and end-to-end lifecycle management.

What Is the Role of Collaboration in Reference Data Management?

Collaboration is essential for reference data management, enabling users to share and synchronise data across applications and departments. This facilitates data standardisation, data quality, improved decision-making, and operational efficiency.

Reference data management is a critical component that enables organisations to maintain accuracy and consistency across data sets. It helps ensure compliance with regulations, reduce compliance risks, and enable faster and error-free data turnaround times.

Reference data management involves establishing a central reference data unit, managing external data, governing reference data, and collaborating. The key features of reference data management include attribute analysis and collaborative efficiencies.

Organisations can benefit from improved regulatory compliance, reduced operational risk, and increased business resource optimisation. Reference data management solutions such as Tempo can also help organisations maximise the benefits of reference data management.


Data Management Insight White Papers

The Reference Data Utility: How Goldman Sachs, JPMorgan Chase & Co and Morgan Stanley Are Breaking the Reference Data Mold

The Reference Data Utility (RDU) built by SmartStream and backed by Goldman Sachs, JPMorgan Chase & Co, and Morgan Stanley is up and running and ready to deliver reference data management services to the banks.

The concept of multi-tenant data utilities is not new, but none have achieved buy-in at the level of the RDU, so what makes it different, how does it operate and will it fulfil the promise of reduced data management costs and increased data quality?

This White Paper, sponsored by SmartStream:

  • Describes how the RDU operates
  • Includes interviews with the founding banks
  • Discusses drivers for joining the RDU
  • Details improvements in data cost and quality
  • Outlines practical issues of participating
  • Forecasts further development of the utility
  • Provides you with a checklist of benefits



DataEd Slides: Essential Reference and Master Data Management


About the Webinar

Data tends to pile up and can be rendered unusable or obsolete without careful maintenance processes. Reference and Master Data Management (MDM) has been a popular Data Management approach to effectively gain mastery over not just the data but the supporting architecture for processing it. This webinar presents MDM as a strategic approach to improving and formalizing practices around those data items that provide context for many organizational transactions: its master data. Too often, MDM has been implemented technology-first and achieved the same very poor track record (one-third succeeding on-time, within budget, and achieving planned functionality). MDM success depends on a coordinated approach typically involving Data Governance and Data Quality activities. 

Learning objectives:

  • Understand foundational reference and MDM concepts based on the Data Management Body of Knowledge (DMBOK)
  • Understand why these are an important component of your Data Architecture
  • Gain awareness of Reference and MDM Frameworks and building blocks
  • Know what MDM guiding principles consist of and best practices
  • Know how to utilize reference and MDM in support of business strategy

About the Speaker

Peter Aiken, PhD

Professor of Information Systems, VCU and Founder, Anything Awesome


Peter Aiken, an acknowledged Data Management (DM) authority, is an Associate Professor at Virginia Commonwealth University, past President of DAMA International, and Associate Director of the MIT International Society of Chief Data Officers. For more than 35 years, Peter has learned from working with hundreds of Data Management practices in 30 countries. Among his 10 books are the first on CDOs (the case for data leadership), the first describing the use of monetization data for profit/good, and the first on modern strategic data thinking. International recognition has resulted in an intensive schedule of events worldwide. Peter also hosts the longest-running DM webinar series (hosted by dataversity.net). From 1999 (before Google, before data was big, and before Data Science), he founded Data Blueprint, a consulting firm that helped more than 150 organizations leverage data for profit, improvement, competitive advantage, and operational efficiencies. His latest venture is Anything Awesome.



Amurta


Reference Data Management (RDM) provides the processes and technologies for recognizing, harmonizing, and sharing relatively static data sets for “reference” by multiple constituencies (people, systems, and other master data domains).

Inconsistent or non-existent Reference data management (RDM) can be debilitating for a company because all systems in a company rely on the reference data as a standard. Without it, business intelligence reports can be inaccurate, and systems integrations may fail. 

Reference Data Management

Reference Entity Profile

Helps create reference data to describe the entities and each attribute in it by using metadata.

Attribute Analysis

Attribute analysis takes the metadata of the reference data as input and generates further metadata describing the attribute mappings that need to be captured.
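In practice, attribute analysis often yields a crosswalk: a mapping from each source system's local codes onto one canonical code set. A minimal sketch, with all systems and codes hypothetical:

```python
# Sketch of a code crosswalk produced by attribute analysis: each source
# system's local codes map onto one canonical code set. Values are made up.
CANONICAL_GENDER = {"M": "Male", "F": "Female", "U": "Unknown"}

CROSSWALK = {
    # (source_system, source_code) -> canonical_code
    ("crm", "male"): "M",
    ("crm", "female"): "F",
    ("billing", "1"): "M",
    ("billing", "2"): "F",
}

def to_canonical(system, code):
    """Translate a source code to the canonical code, defaulting to Unknown."""
    return CROSSWALK.get((system, code.lower()), "U")

print(to_canonical("billing", "2"))  # F
```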


Data Analysis Repository

This includes facts about the reference dataset or individual codes. This helps users of the reference data to understand how to interpret and use it.

Import Reference Data

The import includes the capability of updating external or internal reference data metadata as per business needs.
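A hedged sketch of such an import step: merge an incoming code file into the current table, recording which codes were added or changed. The two-column file layout is an assumption for illustration:

```python
import csv
import io

# Sketch: merge an incoming reference code file (code,description) into the
# current table, reporting additions and changes. The layout is assumed.
def import_codes(current, incoming_csv):
    added, changed = [], []
    for row in csv.DictReader(io.StringIO(incoming_csv)):
        code, desc = row["code"], row["description"]
        if code not in current:
            added.append(code)
        elif current[code] != desc:
            changed.append(code)
        current[code] = desc
    return added, changed

table = {"USD": "US Dollar"}
feed = "code,description\nUSD,US dollar\nEUR,Euro\n"
print(import_codes(table, feed))  # (['EUR'], ['USD'])
```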


Assign Accountabilities

Accountabilities are assigned for all aspects of reference data management on a per-dataset basis. For internal reference datasets in particular, this requires a rich set of reference data metadata elements.

Distribute Reference Data

Distribution makes current, synchronized reference data available to every consuming application and user across systems.

Establish a Central Reference Data Unit

Establishing a Reference Data Unit will help oversee data management across the organization. It is important to use this for data standardization, data quality, and operational goals to increase efficiency.  

Manage External Data

Use the previously established standard practices to discover, profile, and understand reference data. This data should be kept up to date.  
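The "discover, profile, and understand" step can be sketched as a simple profile of an incoming value list: frequency counts plus any values that fall outside the approved code set. The status codes below are illustrative:

```python
from collections import Counter

# Sketch: profile incoming reference values before trusting them --
# frequency counts plus any values outside the approved code set.
APPROVED = {"ACTIVE", "CLOSED", "SUSPENDED"}

def profile(values, approved=APPROVED):
    counts = Counter(values)
    unexpected = sorted(set(values) - approved)
    return {"distinct": len(counts), "counts": dict(counts),
            "unexpected": unexpected}

sample = ["ACTIVE", "ACTIVE", "closed", "CLOSED", "DORMANT"]
report = profile(sample)
print(report["unexpected"])  # ['DORMANT', 'closed']
```

Note that the casing mismatch ("closed") surfaces alongside the genuinely unknown code ("DORMANT"), which is exactly the kind of issue profiling is meant to catch before the data is distributed.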

Govern Reference Data 

Reference data should be governed by SMEs, since they are most likely to have created the standards, and the governing team should be aware of any changes to the reference data. Departments should also be held accountable for their internal reference data.

Collaboration

Since reference data can be used throughout the enterprise, all applications and all users need current, synchronized data. This ensures operational efficiency.


Faster Turnaround

RDM solutions not only automate the process of obtaining error-free data but also save an enormous amount of time.


Reduced Operational Risk

By defining reference data in one central location and applying it to enrich business data, users can speed up processes, reduce operational risk, and increase efficiency.


Improved Regulatory Compliance

An RDM application simplifies the challenges related to security breaches and regulatory policy enforcement, thereby improving regulatory compliance.


  • Client: a leading retail customer.
  • Goal: improve data quality and the omni-channel experience for their consumers.

Business Challenge

  • Automate multiple source data processing using a single platform.
  • Need to maintain a single source of truth by matching and merging the duplicate records.
  • Managing future data enrichment in a robust and faster way.

Project Requirements

  • Create an MDM implementation to maintain Consumer (individual) and Customer (Business) records.
  • Integration of different source data.
  • Cleansing the data with given business validation data quality rules.
  • Defining the data management strategy to master current and future consumers and customer data.

Solution Highlights

  • Created an automated process to integrate the data collected from multiple sources.
  • Managed data quality requirements for the selected attributes in the data asset through standard business validation rules.
  • Assessed the quality of each data source by generating operational reports.
  • Built a data model to support feeding source systems with required changes per business requirements, so that further transactions can be initiated without confusion.
  • Enabled easy editing of master data through the MDM data browser.
  • Provided different data models for various business specifications (e.g. Customer, Partner, Sales).
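The business-validation step described above can be sketched generically as rule-based cleansing: each rule is a named predicate, and records failing any rule are routed to a reject report. The rules below are invented examples, not the customer's actual rules:

```python
import re

# Generic sketch of rule-based cleansing: each rule is a (name, predicate)
# pair; records failing any rule are routed to a reject report.
RULES = [
    ("email_format",
     lambda r: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", r.get("email", "")) is not None),
    ("country_code",
     lambda r: r.get("country") in {"IE", "GB", "US"}),
]

def cleanse(records):
    """Split records into (clean, rejects); rejects carry the failed rules."""
    clean, rejects = [], []
    for rec in records:
        failed = [name for name, ok in RULES if not ok(rec)]
        (rejects if failed else clean).append((rec, failed))
    return clean, rejects

recs = [{"email": "a@b.com", "country": "IE"},
        {"email": "bad", "country": "XX"}]
clean, rejects = cleanse(recs)
print(len(clean), len(rejects))  # 1 1
```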

AMURTA Value

  • Supported management of operational metrics per business validation rules.
  • Reduced the time taken to view all merged consumer information by role, through the compilation of multiple source data.
  • Provided key metric information to business executives in near real time.
  • Updated the related operational dashboards.
  • Enabled the business to identify the number of consumers in a specific consumer role, improving sales performance.
  • Improved business resource optimization.


Platform Overview

Posidex's Prime MDM, a customer MDM platform, gives our clients a game-changing ability to manage customer data in real time with great accuracy and integrity, strengthening processes across customer onboarding, KYC due diligence, operations, marketing, fraud detection, risk management, compliance, and customer experience.

Master Data Management

Prime MDM Platform powered by PrimeMatch® technology enables your organization to build one master record or the golden record of every unique customer across all your business source systems. This golden record of the customer can be linked to transactional data in real time to generate multiple contextual views facilitating the organization to drive better customer engagement.

Customer Data Platform

Customer data platforms that simply rely on unique identifiers to reconcile or identify the customers, expose enterprises to skewed customer related analytics, putting organizations to operational as well as regulatory risks. Prime CDP leverages Machine Learning, inbuilt data quality management, sophisticated entity resolution and In Memory analytics to provide a near instant and highly accurate single view of your customer.

Products Overview

With patronage of over 18 years and innumerable installations integrated with some of the most critical applications, Posidex's products are closely aligned with Industry requirements for managing their customer data efficiently. Deep dive into each of the products which are made for the world and proud to be made in India.

Prime360 (powered by PrimeMatch® technology) is a product for Real-time Entity Search and Match. Helps facilitate any interacting system to perform an identity search within the target data collated from various source systems. Offered in three variants viz., Prime360 Lite, Express and Advanced.

CLiP (powered by SetMatch technology) is used for One-time de-duplication, UCIC (Unique Customer Identification Code), Golden Record, Family ID Creation and/or to meet any incremental matching requirements for identifying unique customer records across various products and lines of businesses. CLIP plays a pivotal role when you want to establish linkages between records where there is no unique Identifier across the data sources.

ScreenCustomer.com is a cloud based Watchlist ‘Name scanning’ bureau service for matching prospective or current customer data with various lists like Federal Reserve/RBI list, OFAC, UN Sanctions list, PEP etc as part of Regulatory Compliance, CFT (Combating Financing of Terrorism), KYC (Know Your Customer) and other EDD (Enhanced Due-Diligence) requirements.

PrimeVer helps automate the validation of customer information coming from multiple providers, making the process simple, streamlined, and quick. This unlocks the organization's potential for faster onboarding and quick decisioning with respect to suspect cases.

PropEx is a property de-duplication solution used for Movable and Immovable assets. It helps in identifying unique properties to avoid re-mortgaging of the same property. For Mortgage loans, it is imperative to ensure that the property being financed is not previously financed. Apart from this, it will be useful to determine the complete exposure of the Project that is being considered for funding and the builder’s exposure.

POSCentral Know Your Customer is built on the basis of an initiative taken by the Indian Government. It enables customers to complete their KYC process only once across the financial sector. The objective is to reduce the burden of producing KYC documents every time customers interact with a different service provider.

Technology Overview

Posidex's innovative PrimeMatch and SetMatch Technology are game changing approaches of integrating and serving the customer data instantly to all the business source systems for the purpose of customer due diligence. Explore the exciting PrimeMatch and SetMatch Technology.

PrimeMatch® is a path breaking search technology for entity resolution and analytics providing the benefits of scalability to millions of records with great accuracy with an instant response. PrimeMatch® delivers the highest possible reliability when searching, matching, screening or grouping data based on demographic parameters of identification in spite of errors and variations in each parameter.

The deduplication process is a problem of quadratic complexity, meaning the number of comparisons grows quadratically as the volume of data increases. Conventional approaches take a long time and significant hardware resources. SetMatch technology can achieve such large-scale deduplication and matching in a fraction of the time and with significantly less hardware.
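SetMatch itself is proprietary, but the general idea of avoiding all-pairs comparison can be illustrated with blocking, a common generic technique (not Posidex's actual algorithm): only records sharing a blocking key are compared, which can drastically cut the number of candidate pairs.

```python
from collections import defaultdict
from itertools import combinations

# Generic blocking sketch: only compare records that share a blocking key
# (here, the first three letters of the surname), instead of all pairs.
def candidate_pairs(records, key=lambda r: r["surname"][:3].lower()):
    blocks = defaultdict(list)
    for i, rec in enumerate(records):
        blocks[key(rec)].append(i)
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(ids, 2))
    return pairs

people = [{"surname": "Murphy"}, {"surname": "Murphey"},
          {"surname": "Byrne"}, {"surname": "Murray"}]
# All-pairs would need 6 comparisons; blocking on "mur" leaves 3.
print(len(candidate_pairs(people)))  # 3
```

A real matcher would then score only these candidate pairs with a fuzzy comparison, which is where the quadratic cost would otherwise bite.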

Resources Overview

Thank you for visiting our website. We have pooled a lot of technology literature, white papers, case studies for your benefit. Deep dive and explore the various facets of the innovation created by Posidex.

About Posidex


POSIDEX Technologies’ innovative Enterprise Customer Data Platform, with its proven sophisticated entity resolution technology based on Machine Learning and InMemory Analytics, has transformed the way businesses operate and engage with their customers during their entire life cycle with the organisation.


Unveiling the Distinctions: Reference Data Management vs Master Data Management

By admin · June 8, 2023 · In Master Data Management


What is Reference Data Management?

What is Master Data Management?

Key Differences Between RDM and MDM

The key differences between RDM and MDM span scope and focus, data management objectives, integration and governance, and data hierarchy.


Importance of Reference and Master Data Management

Reference data management matters for ensuring data consistency, supporting data integration, and improving decision-making. Master data management matters for enhancing data quality, supporting business processes, enabling data governance, and facilitating integration and interoperability.

In conclusion, the differences between RDM and MDM, and their importance in data governance, cannot be overstated. It is essential to have a robust Reference and Master Data Management strategy in place, as they play a crucial role in ensuring data consistency, improving decision-making, supporting regulatory compliance and risk mitigation efforts, and enhancing operational efficiency.

If you are looking to implement Reference and Master Data Management solutions, Posidex can help. Contact us today to learn more about our comprehensive data management solutions and how we can assist your organization in achieving data excellence.



Reference Data Challenge

National Institute of Standards and Technology (NIST)

The Reference Data Challenge was a call to action for app developers to help improve the way NIST shares scientific reference data. Scientists and engineers need data—from the atomic weight of carbon and the structure of benzene to the most precise value for the speed of light. High quality physical and chemical reference data help researchers design experiments, build better products, solve health and environmental problems and even study the stars. NIST provides some of the most accurate and comprehensive datasets in the world, known as standard reference data (SRD). In this challenge, entrants created an app that used one (or more) of six popular SRD. A panel of judges—including internet pioneer Vint Cerf and the Department of Commerce Chief Data Officer Ian Kalin—selected the winning apps based on the apps' potential impact, creativity and innovation, implementation and use of SRD.
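Machine-readable reference data of this kind is typically distributed as JSON. A minimal sketch of consuming such a file follows; the payload structure is a hypothetical example, not NIST's actual SRD schema:

```python
import json

# Hypothetical JSON payload shaped like a physical-constants dataset;
# real SRD schemas differ -- this only illustrates consumption.
payload = """
{
  "constants": [
    {"name": "speed of light in vacuum", "symbol": "c",
     "value": 299792458, "unit": "m s^-1"}
  ]
}
"""

def lookup(doc, symbol):
    """Find a constant by symbol in the parsed document."""
    for c in json.loads(doc)["constants"]:
        if c["symbol"] == symbol:
            return c["value"], c["unit"]
    raise KeyError(symbol)

print(lookup(payload, "c"))  # (299792458, 'm s^-1')
```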

The challenge ran over a two-month period and was hosted on the Devpost website. A total of $45,000 in prizes was offered, with first place receiving $30,000, second place $10,000 and third place $5,000.

As a result of the challenge, 25 new apps were created using NIST reference data, helping to initiate a modernization of NIST's publicly accessible data. Over 130 participants registered on the site, building interest in NIST reference data. The contest also stimulated the growth of at least one new company: the first place winner, Meru Apps LLC, intends to develop products that will help laboratory scientists use data more effectively.


Kris Reyes, winner of the NIST Reference Data Challenge, works in the lab. (Photo Courtesy of Meru Apps)


The top prize of $30,000 went to Kris Reyes from Meru Apps in Princeton, N.J. His app, Meru Lab Reference, allows users to quickly access NIST chemical species data with the tap of a near-field communication (NFC) tag, a smart chip that is able to store digital information and share it with a smartphone. The second place prize of $10,000 went to college students Zachary Ratliff from Waco, Texas, and Daniel Graham from Danville, Ky., for their Lab Pal app that is a "go-to" tool for scientists and engineers. The third place award of $5,000 went to a team from MetroStar Systems in Reston, Va., whose app, ChemBook, provides multimedia information about chemical elements from NIST and other sources. Honorable mention awards (no cash prize) went to Andy Hall's SciCalc9000 app, a scientific calculator that integrates NIST SRD, and Annie Hui and Neil Wood of R-Star Technology Solutions, whose Thermocouple Calibrator app is a handy tool for converting between voltage and temperature. This challenge kicked off a bigger NIST open data commitment to improve the accessibility of SRD for app developers and other users of this valuable scientific data.

Areas of Excellence

Area of Excellence #1: "Estimate Budget and Resources"

NIST personnel were involved in the design and execution of the Reference Data Challenge. In addition to a significant amount of the challenge manager's (a NIST employee) time, staff in the NIST Office of Information and Systems Management (OISM) were employed to generate machine readable (JSON) data files. NIST research and technical staff were consulted to help identify and prepare the data. NIST's Office of the Chief Counsel also dedicated time for scoping and finalizing the challenge rules and associated documentation per America COMPETES Act requirements. The challenge was hosted on a website provided by Devpost (formerly Challengepost). The prize funds, challenge website hosting and OISM time were billed to an account established for the purpose of the challenge. Other staff time was provided in kind or as part of established responsibilities. In total, executing the challenge used approximately one-fifth FTE (challenge manager) and two FTEs at 10 percent effort, and $37,500 for IT services (data preparation and challenge website).

Area of Excellence #2: "Develop Terms and Conditions"

The NIST challenge manager coordinated closely with NIST's Office of the Chief Counsel to develop a comprehensive set of rules that provided clarity to participants about the requirements and expectations for their engagement in the challenge, while maintaining compliance with requirements stipulated under the America COMPETES Act authority. The terms and conditions (or rules, in this case) were developed with the goals of the challenge in mind: to increase awareness of important NIST scientific data resources and to stimulate innovative uses of NIST data as part of a larger open data commitment. Below are some aspects of NIST's development of the challenge rules, which were posted in a notice in the Federal Register.

  • Eligibility: This competition was open to all individuals over the age of 18 who are residents of one of the 50 United States, the District of Columbia, Puerto Rico, the U.S. Virgin Islands, Guam, the Commonwealth of the Northern Mariana Islands or American Samoa; and to for-profit or nonprofit corporations, institutions or other validly formed legal entities organized or incorporated in, and which maintain a primary place of business in, any of the preceding jurisdictions. An individual, whether participating singly or with a group, was required to be a citizen or permanent resident of the United States.
  • Intellectual Property: NIST did not make any claim to ownership of entries, consistent with the challenge goal of stimulating innovative app solutions by individuals outside of NIST.
  • Minimum Criteria: The rules provided a list of minimum criteria for consideration for a prize. While some of these were stipulated by the legal authority used (e.g., no use of NIST or Department of Commerce logos), additional criteria were added to support robust, complete apps. For example, the integrity and acknowledgement of NIST data was an important aspect of apps resulting from this challenge. Therefore, one criterion stipulated that the app include a statement that "This product uses data provided by the National Institute of Standards and Technology (NIST) but is not endorsed or certified by NIST," and that the NIST standard reference data numbers be displayed prominently.
  • Freely Available Data: NIST required that each submission use at least one of six required datasets posted on the NIST website and on data.gov. The rules allowed for the addition of other freely available datasets that enhanced the usefulness of the app for other users. In this competition, there were no requirements for open source solutions.

Area of Excellence #3: "Pay Winners"

Upon determination of the three winners by the judging panel, the challenge manager immediately informed the winners that they were being considered for a prize. To claim the prize, they were required to complete a winner verification form confirming their eligibility to participate in the challenge. Upon receipt of this completed form, NIST initiated payment processing promptly, in line with the public announcement of the winners.

Challenge Type

This app challenge required solvers to use one (or more) of six valuable NIST scientific datasets, helping to promote awareness and use of NIST reference data. The first place winner started a company and is developing his app for wide use by scientists and engineers.

Legal Authority

America COMPETES Act (15 U.S.C. § 3719)

Challenge Website

nistdata.devpost.com

Reference architecture design for farm management information systems: a multi-case study approach

  • Open access
  • Published: 01 June 2020
  • Volume 22 , pages 22–50, ( 2021 )

  • J. Tummers,
  • A. Kassahun &
  • B. Tekinerdogan (ORCID: orcid.org/0000-0002-8538-7261)

One of the key elements of precision agriculture is the farm management information system (FMIS) that is responsible for data management, analytics and subsequent decision support. Various FMISs have been developed to support the management of farm businesses. A key artefact in the development of FMISs is the software architecture that defines the gross-level structure of the system. The software architecture is important for understanding the system, analysing the design decisions and guiding the further development of the system based on the architecture. To assist in the design of the FMIS architecture, several reference architectures have been provided in the literature. Unfortunately, in practice, it is less trivial to derive the application architecture from these reference architectures. Two underlying reasons for this were identified. First of all, it appears that the proposed reference architectures do not specifically focus on FMIS but have a rather broad scope of the agricultural domain in general. Secondly, the proposed reference architectures do not seem to have followed the proper architecture documentation guidelines as defined in the software architecture community and lack precision, thus impeding the design of the required application architectures. Presented in this article is a novel reference architecture that is dedicated to the specific FMIS domain and which is documented using the software architecture documentation guidelines. In addition, a systematic approach for deriving application architectures from the proposed reference architecture is provided. To illustrate the approach, the results of multi-case study research are shown in which the presented reference architecture is used for deriving different FMIS application architectures.

Introduction

The continuous developments in information technology (IT) over the last decades have a substantial and increasing impact on various industrial domains. In recent years, IT and the various developments such as the internet of things (IoT), cloud computing and machine learning have also shaped the farming domain in which digitization is increasingly adopted to enhance the effective and efficient realization of the various farm business processes. For example, with the help of IoT, farming practices such as yield monitoring, cultivar selection, pest management, irrigation, etc. can be applied more precisely (Köksal and Tekinerdogan 2019 ). In this context, farm management information systems (FMISs) are being developed to manage the large amount of information that is involved in these processes, and likewise to keep track of and support the farm activities. The rapid development of FMISs over recent years has been partly driven by progress in precision agriculture (Hewage et al. 2017 ). FMISs can play a significant role in precision agriculture by providing functions for site-specific purposes (Fountas et al. 2015a ) and can reunite the different parts needed for precision agriculture (Nikkilä et al. 2010 ; Stafford 2000 ). Each FMIS supports various features such as milk yield management in dairy farming, fertilization management in arable farming and financial management in all types of farming. Examples of FMISs are Agworld, FarmWorks or 365FarmNet (Paraforos et al. 2016 ). Each FMIS focuses on one or multiple domains of the agricultural sector, for example, livestock or arable farming (Tummers et al. 2019 ).

One of the critical artefacts in the design of FMISs is the software architecture (Köksal and Tekinerdogan 2019; Tekinerdogan and Köksal 2018). Bass et al. (2003) defined the software architecture of a program or a computing system as: "The structure of the system, which comprises software components, the externally visible properties of those components and the relationships among them." An architecture-based development of FMISs has several benefits, including support for communication among the stakeholders, guidance of the design decisions and evaluation of the system. For proper design and documentation of the software architecture, the guidelines defined in the software architecture domain are usually followed. Accordingly, the software architecture is documented according to the various stakeholder concerns, and for this reason multiple different so-called software architecture views are used (Clements et al. 2010). In the agricultural domain, multiple stakeholders have their own concerns, which can be mapped to specific architecture views. A particular type of software architecture that is generic and can be used to help design the specific software architectures of multiple software systems is the reference architecture (RA). An RA is a generic design that can be used to derive specific application architectures (AAs) based on the identified stakeholder concerns. RAs can help to design AAs more quickly and with higher quality. For this, the RA itself must define the proper scope of the domain and be well-documented.

The objective of this article is to develop a validated RA dedicated to FMISs following well-established architecture design methods. This study builds on the results of an earlier study in which a systematic literature review (SLR) was conducted on the state-of-the-art of FMISs (Tummers et al. 2019 ). In the earlier study, features that are implemented in current FMISs and obstacles that are related to the development and usage of FMISs were systematically identified. In the literature, several RA designs have been proposed for FMIS. Unfortunately, in practice, it seems less trivial to derive the AA from these RAs. Two basic reasons could be identified for this. First of all, it appears that the proposed RAs do not specifically focus on FMIS but have a rather broad scope of the agricultural domain in general. Secondly, the proposed RAs do not seem to follow the proper architecture documentation guidelines as defined in the software architecture community, lack precision and thus impede the design of the required AAs.

To fulfil the objective of this article, a new RA is proposed that was developed based on the findings of the earlier study and on experiences in designing software architectures. The RA is dedicated to the specific FMIS domain and is documented using the software architecture documentation guidelines. In addition, a systematic approach for deriving AAs from the proposed RA is provided. To illustrate the approach and to validate the RA, a multi-case study was conducted in which different AAs were derived from the proposed RA. Based on these case studies, the lessons learned in deriving an AA from the proposed RA are reported.

Background

This section presents a summary of the results of the earlier SLR (Tummers et al. 2019) and then describes the background on RA design and documentation.

Currently, many different FMISs have been proposed that can be used to support farm management activities (Fountas et al. 2015a; Capterra 2020; Köksal and Tekinerdogan 2019). According to Sørensen et al. (2010), an FMIS is defined as: "a planned system for the collecting, processing, storing and disseminating of data in the form of information needed to carry out the operations functions of the farm." The FMIS is a management information system (MIS) for the agricultural domain. Watson et al. (1991) defined the MIS as: "an organizational method of providing past, present and projected information related to internal operations and external intelligence." Over the years, FMISs have developed from simple recording systems into extensive FMISs (Fountas et al. 2015a). Where the MIS can support decision making by providing timely information about the planning, control and operational functions of an organization (Watson et al. 1991), the FMIS does the same for the agricultural domain. Currently, the primary goal of FMISs is to reduce production costs, maintain high quality and comply with agricultural standards (Fountas et al. 2015a).

In Tummers et al. (2019), an SLR was conducted in order to identify the state-of-the-art of FMISs. An SLR makes it possible to identify, evaluate and interpret all available research relevant to a particular research question, topic area or phenomenon of interest (Kitchenham et al. 2009). With the help of a review protocol based on the guidelines presented in Kitchenham et al. (2009), this study identified 1048 papers, of which 38 primary studies were selected and analysed in further detail; these are presented in Appendix A. This SLR documented the commonly used features and the commonly encountered obstacles to the development and adoption of FMISs. A feature was described as a user-visible characteristic of an FMIS, and an obstacle was described as a problem related to the development or the use of FMISs. These 38 studies have all been published since 2008 and passed various selection and quality checks. Through the detailed analysis, 81 unique FMIS features, 53 unique obstacles of FMISs and multiple (reference) architectures were identified. The features are presented in Appendix B, the obstacles in Appendix C, and the RAs are further discussed in the "Current reference architectures" section. Furthermore, a unique set of 22 stakeholders and their concerns regarding the development of FMISs was identified, which is presented in Table 1. For this table, the stakeholders relevant to the development of an FMIS were considered. Herein, the definition of stakeholder given by the Project Management Institute (PMI) (Project Management Institute 2013) was adopted: "an individual, group, or organization, who may affect, be affected by, or perceive itself to be affected by a decision, activity, or outcome of a project".

To model the common and variant features of the FMIS domain, feature modelling was adopted. In a feature model, a "feature" is defined as "a prominent or distinctive user-visible aspect, quality or characteristic of a software system or systems" (Kang et al. 1990). The feature model is a tree-shaped model that shows the common, alternative and optional features of a system (Kang et al. 1990). The feature model for the FMIS is presented in Fig. 1 and splits the features for the FMIS into four main features: general MIS, data management, crop farming and animal farming. The general MIS feature contains the features that are not agriculture-specific and are found in MISs across multiple sectors. The data management feature contains features that are related to the management of the data and data-based decision making. The crop farming feature contains the sub-features directly related to the farming of crops, and the animal farming feature contains only the features that are directly related to the management of animals. The features are based on the features from the previous study (available in Appendix B), where features that could be sub-features of another feature are shown under the most general feature. The precise definitions of the features can be found in the previous SLR (Tummers et al. 2019), where the sixteen most frequently occurring features from the literature were described.

Figure 1. A downsized version of the feature model for the FMIS. Numbers to the right of the features show the number of sub-features not shown.
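
The mandatory/optional semantics of such a feature model can be made concrete in a few lines of code. The sketch below is illustrative only — the feature names are hypothetical examples in the FMIS domain, not taken verbatim from Fig. 1 — and checks whether a product's feature selection respects the mandatory features of the tree, in the spirit of Kang et al. (1990).

```python
# Illustrative sketch (not from the article): a tiny feature-model tree with
# mandatory and optional features, plus a validity check for a product
# configuration. Feature names are hypothetical FMIS-domain examples.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Feature:
    name: str
    mandatory: bool = True          # optional features may be omitted in a product
    children: List["Feature"] = field(default_factory=list)

def valid_configuration(root: Feature, selected: set) -> bool:
    """A selection is valid if every mandatory child of each selected
    feature is also selected, walking the tree top-down."""
    if root.name not in selected:
        return not root.mandatory   # an unselected optional subtree is fine
    return all(valid_configuration(c, selected) for c in root.children)

# A downsized FMIS feature model with four main features, as in Fig. 1.
fmis = Feature("FMIS", children=[
    Feature("General MIS", children=[Feature("User management")]),
    Feature("Data management", children=[Feature("Decision support", mandatory=False)]),
    Feature("Crop farming", mandatory=False),
    Feature("Animal farming", mandatory=False,
            children=[Feature("Milk yield management", mandatory=False)]),
])

# An arable-farming product omits the animal-farming subtree entirely.
arable = {"FMIS", "General MIS", "User management", "Data management", "Crop farming"}
print(valid_configuration(fmis, arable))   # → True
```

Dropping a mandatory sub-feature (e.g., selecting "General MIS" without "User management") would make the configuration invalid, which is exactly the constraint the tree notation encodes.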

Architecture design

Every software system of interest has an architecture that determines its structure. The software architecture is an abstract representation of a system that presents its gross-level structure. The architecture is important for supporting communication among stakeholders, for guiding design decisions and for analysing the overall system (Tekinerdogan and Uzun 2019). There are two traditions of software architecture design: informal and formal. Informal architecture design does not follow a particular modelling technique and uses simple boxes-and-lines models to represent the architecture. The formal and well-established approach uses multiple views of an architecture description following the ISO/IEC/IEEE 42010 standard (ISO/IEC/IEEE 42010 2011).

The conceptual model for the core architecture description from the ISO/IEC/IEEE 42010 standard is presented in Fig. 2. The architecture description has one or multiple views, and each view must have a viewpoint. Each view addresses one or more of the stakeholder concerns. The architecture view describes the architecture of a system in alignment with the conventions and rules of its architectural viewpoint.

Figure 2. Part of the content model of the core architecture description. Adapted from the ISO/IEC/IEEE 42010 standard (ISO/IEC/IEEE 42010 2011).
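
The view–viewpoint–concern relations in Fig. 2 can be sketched as a small data model. The following is an illustrative reading of the standard's core concepts, not its formal metamodel; the viewpoint and concern names are hypothetical examples for the FMIS case.

```python
# Illustrative sketch (assumed structure): each view is governed by a
# viewpoint, and each viewpoint frames one or more stakeholder concerns,
# as in the ISO/IEC/IEEE 42010 conceptual model.

from dataclasses import dataclass
from typing import List

@dataclass
class Viewpoint:
    name: str
    framed_concerns: List[str]

@dataclass
class View:
    name: str
    viewpoint: Viewpoint

@dataclass
class ArchitectureDescription:
    views: List[View]

    def uncovered(self, stakeholder_concerns: List[str]) -> List[str]:
        """Concerns not framed by any viewpoint of this description."""
        framed = {c for v in self.views for c in v.viewpoint.framed_concerns}
        return [c for c in stakeholder_concerns if c not in framed]

# Hypothetical FMIS example using the two viewpoints chosen in this article.
context_vp = Viewpoint("Context", ["system scope", "external interfaces"])
decomposition_vp = Viewpoint("Decomposition", ["module structure", "work division"])

ad = ArchitectureDescription([View("FMIS context view", context_vp),
                              View("FMIS decomposition view", decomposition_vp)])

print(ad.uncovered(["system scope", "module structure", "runtime interaction"]))
# → ['runtime interaction']
```

A concern left uncovered (here, runtime interaction) signals that an additional view — for instance one in the component-and-connector style — may be needed.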

According to Clements et al. (2010) and their views and beyond (V&B) approach, each view has a different style. They divide the views into four styles: module, component-and-connector, allocation, and hybrid. Each style can in turn be divided into a total of seventeen sub-styles. The module style addresses concerns related to the implementation, while the component-and-connector style has views related to the interaction structure. The allocation style has views that describe how software elements are allocated to the environment of the system. A design that can generalise multiple software architectures from a specific domain is an RA. According to the U.S. Department of Defense, an RA is: "an authoritative source of information about a specific subject area that guides and constrains the instantiations of multiple architectures and solutions" (US Dept. of Defence/Office of the DoD CIO 2010). This source of information can be obtained by documenting the relevant views for the architecture, where multiple views together make up the RA. The RA should serve as an architecture blueprint for future architects and should provide a common lexicon, taxonomy and (architectural) vision (Muller 2012). Furthermore, the RA should encourage the use of common standards, specifications and patterns for the FMIS (US Dept. of Defence/Office of the DoD CIO 2010).

RAs are usually developed either through a collaboration of diverse organizations (through a "committee") or by an organization that has multiple and diverse customers. To be useful, an RA should meet the following criteria (Muller 2012): it should be understandable for all stakeholders, be accessible and read by the majority of the organization, address the critical domain issues, be of satisfactory quality, be acceptable, be up-to-date and maintainable, and add value to the business. According to Kassahun (2017), there are three different scenarios for applying an RA, which are presented in Fig. 3. The RA can be used as a reference for the development of an AA. In this study, the AA is defined as the software model of a particular application presented by a combination of architectural views. First, an RA needs to be made, which can then be used for the development of the AA. The RA consists of multiple views, each of which can be used for the derivation of the corresponding view in the AA. Each selected set of views for the RA will therefore also lead to a corresponding set of views for the AA, as can be seen in Fig. 4.

Figure 3. Three scenarios of applying an RA. Adopted from Kassahun (2017).

Figure 4. Methods used for deriving the AA. Each view from the RA leads to a view in the AA.

Research methodology

To address the earlier defined issues on the proper scoping and documentation of RAs, the objective of this study was to develop an RA dedicated to FMIS and documented according to the current architecture documentation guidelines. To fulfil this objective, the following research questions were defined.

RQ1: What are the existing reference architectures for FMISs?

RQ2: How to design a feasible reference architecture for FMISs?

RQ3: How can an application architecture be derived from the reference architecture?

RQ4: How effective is the designed reference architecture for deriving application architectures?

The first research question focuses on identifying the preliminary work on RAs for FMISs. This research question is answered using the studies from the SLR that focus on architectures. The results for this research question are available in the "Current reference architectures" section. The second research question aims to derive a feasible RA that both covers a broad domain of systems and is designed according to existing, well-established architecture design knowledge. For this, design science research was used, together with the guidelines for designing architectures as defined in the software and system architecture design community. The results for this research question are presented in the "Reference architecture" section.

The third research question aims to guide developers in deriving an AA from the RA; the results of this question are presented in the "Method for deriving an application architecture" section. The fourth research question aims to analyse how effective the designed RA was for deriving AAs. For this, multi-case study research was applied, in which three case studies were undertaken to illustrate the application of the RA. With the help of multi-case study research, the effectiveness of the proposed RA could be qualitatively assessed, taking the obstacles from the previous study (available in Appendix C) into account. The results of this research question are presented in the "Evaluation of the reference architecture: a multi-case study approach" section.

Figure 5 shows how the research questions and their outputs are related. The first step was the SLR, which was done in the previous study. The SLR had the stakeholders and their concerns, the features and the obstacles as outputs. In the second step, the actual design of the RA took place; this started with the decision as to which views were needed, based on the stakeholders and their concerns. If requirements were missing or unclear, a step back was taken. The design of the RA had a set of views as output, which was evaluated with a multi-case study in the third step. The RA was evaluated with one case study with a project manager of an FMIS under development, one case study with a farmer that uses an FMIS daily, and one with a developer of a commercially available FMIS. The case studies used the developed RA to make an AA of an existing FMIS. The information from the multi-case study was used again to improve the RA in the second step, and after the evaluation, the RA was finished.

Figure 5. The iterative method used in this study for deriving an architectural design.

Current reference architectures

The SLR from the previous study (Tummers et al. 2019 ) identified studies that presented an RA for FMISs and identified a few software architectures that can be used as a reference but are not named as such. The four prominent studies that presented RAs are presented in Table 2 and further discussed below.

Kruize et al. ( 2016 ) presented an RA that can be used to map, assess, design and implement farm software ecosystems that contribute to integrated FMISs. The farm software ecosystem in this study should allow for integration between ICT components on the farm. Based on requirement analysis, they presented a unified modelling language (UML) class diagram that described the relationships between the various components in the farm software ecosystem. These components were: the application component, the ICT component, the information system and the agri-food company. Furthermore, relationships between several actors of a farm software ecosystem were shown. These actors were: the software vendor, the agriculture service provider, the agri-food company, and the infrastructure provider. They presented the layout of the farm software ecosystem and showed how different components and actor roles were related to each other. From this diagram, it can be seen that both the end-user (the agri-food company) and sensor use the FMIS, both of which have their own application with their own user interface. Furthermore, it can be seen that the FMIS consists of multiple applications from multiple software vendors and that the FMIS contains a cloud node. The study from Kruize et al. ( 2016 ) focused on a structure that allowed for the integration of multiple FMISs, in contrast to this study that focuses on an RA for a single FMIS.

López-Riquelme et al. (2017) described a cloud-based software architecture for precision agriculture. They used FIWARE (FIWARE community 2020) as a cloud provider and used multiple components to develop an application that aimed to reduce the amount of water needed for irrigation tasks. They presented the main hardware and software needed for this application. They furthermore presented a database structure and a graphical representation of the data flow. These diagrams mainly focus on sensor data that is sent via a GPRS node. The models in this study do not, however, follow formal modelling techniques [UML or comparable (Van Vliet 1993)] but give descriptions based on boxes and lines. In contrast to the goal of this study, the architecture given in López-Riquelme et al. (2017) was only focused on sending and receiving sensor data, which is only a part of the full FMIS.

Kaloxylos et al. (2014) presented a software architecture that can be considered an RA for a cloud-based FMIS with a service-oriented architecture that could serve as a marketplace of services for farmers. They presented an ArchiMate (Open Group 2012) model of the system and also provided a proof-of-concept implementation. The FMIS was running on a cloud service and consisted of a monitoring service, data collector, data analyser, FMIS data storage service, notification service, coordination module and an execution module. Furthermore, there was a local FMIS, which had a logic controller, and there was a copy of the local FMIS. The cloud FMIS is connected with external services, which include monitoring reference services, solution reference services, resource management, scheduling and an E-agriculturist service. Also, the equipment provider, pulling unit and implement, and the actuator are connected to the local FMIS. They performed an implementation of the architecture with field tests in a greenhouse. Kaloxylos et al. (2014) did not go into the depth of the FMIS architecture but presented the business logic for the cases of a network connection failure, a sensor problem and the monitoring of service consumption. They offered much detail about the server configuration, programming language, data schemas and databases used; this is, however, too detailed for the RA in this study.

Nikkilä et al. (2010) presented a software architecture for FMISs in precision agriculture. They started with a requirement analysis and, based on these requirements, presented a software architecture that was structured as a web application. The FMIS should be able to create operational plans and store most of the data from the field operations as documentation. They presented the FMIS architecture from the viewpoint of the developer, starting with data storage and application logic, which together are considered the FMIS. The application logic contains a communication module, a data transformation module, a class library and a load balancing module. Data storage includes authentication, general FMIS and geographic information system (GIS) databases. The users, services and authorities that use the FMIS via the data transfer (internet) are the human users, web services, ISO 11783 [International Organization for Standardization (ISO) 2017] (which allows communication with tractor-implement combinations) and other services. There was also a detailed view of the application logic presented, which contained the communication, data transformation, class library and load balancing protocols and methods. This study, however, only showed two diagrams (based on boxes and lines), and the RA was mainly presented textually.

Reference architecture

In the following, the two viewpoints selected for documenting the RA are first described. Subsequently, the FMIS RA is described using these two viewpoints. Finally, the method for deriving an AA from the proposed RA is discussed.

Selection of views

The main purpose of the FMIS is to assist in the management of the daily operations on the farm in the short term, but it should also support the long-term vision for agricultural production (Cojocaru et al. 2014). To do so, the RA should be able to handle all features that were derived from the previous study (Appendix B). The RA should be available to farmers in all the different sectors of farming and be flexible enough to be customised for different sectors. For example, the needs for a dairy FMIS are of course different from those for an arable FMIS, and the two will thus have different architecture decompositions.

For modelling the RA, the V&B approach from Clements et al. (2010) was adopted; this approach consists of a predefined set of viewpoints. The viewpoints can be applied to a broad set of applications and stakeholder concerns. The V&B approach describes more than 17 viewpoints but, as indicated in the V&B approach, only those viewpoints that are of interest are selected for modelling the architecture of a particular system. Two viewpoints were selected for modelling the FMIS RA that are of interest to almost any domain, including the FMIS domain. These two viewpoints are the context viewpoint and the decomposition viewpoint. The context view of a system defines the scope of the system within the external environment. It depicts the entities that are outside the scope of the system but which have a direct relation with the system. Hence, it provides a view in which the system is located in the overall scope of which it is part. The decomposition view describes the key elements of which the system is composed. It shows how the system is decomposed into multiple modules and shows the sub-modules of these modules with a parent–child relationship.

Context view

The context view of a system is represented using a so-called context diagram. This diagram represents the overall purpose of the system and its interfaces with the external environment (Kim et al. 2003). It shows the system boundaries, its environment and the entities it interacts with (Choubey 2011). It reveals what is inside and what is outside the system and is often the first piece of architectural information for a reader. The context diagram for the FMIS is presented in Fig. 6. The external entities are based on the stakeholders from the "Background" section. Each farm owner has one or multiple farms that each have a farm manager. Furthermore, the automated user, the plug-in developer and other FMISs were added as external entities. The automated user is described as a sensor (e.g., soil moisture sensor) or a robot (e.g., milking robot) that communicates with and provides data to the system. The plug-in developer is a developer from another company that delivers external software modules that can extend the system. The other FMIS entity represents an FMIS from another developer that interacts with the system.

Figure 6. The reference context diagram. This diagram shows the relation of the system to external entities, including the optional and required relations and entities. Only the interactions considered the most important (derived from the stakeholder concerns in Table 1) are shown.

The RA context diagram needs to capture the variability in the external entities and their relations with the system. Not all external entities are applicable to every FMIS, and certain external entities are not applicable to a specific sector of the agricultural domain. Tekinerdogan and Sözer (2012) presented methods for showing variability in different views: mandatory elements are drawn with solid lines, while optional elements are drawn with dotted lines. This approach was also applied in modelling the context view of the RA for FMIS, as shown in Fig. 6. The elements (the entities as well as the relations) are based on the findings of the earlier SLR.
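The mandatory/optional distinction can be captured in a small data model. The sketch below is illustrative only: the entity names follow Fig. 6, but the representation itself is an assumption, not part of the original approach.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalEntity:
    """An external entity in the reference context diagram."""
    name: str
    mandatory: bool  # True: drawn with a solid line; False: optional, dotted line

# Illustrative subset of entities from the reference context diagram (Fig. 6)
reference_entities = [
    ExternalEntity("farm owner", mandatory=True),
    ExternalEntity("farm manager", mandatory=True),
    ExternalEntity("automated user", mandatory=False),
    ExternalEntity("plug-in developer", mandatory=False),
    ExternalEntity("other FMIS", mandatory=False),
]

def line_style(entity: ExternalEntity) -> str:
    """Drawing convention of Tekinerdogan and Sözer (2012)."""
    return "solid" if entity.mandatory else "dotted"
```

An application context diagram would then be derived by dropping the optional entities that do not apply to the FMIS at hand.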

Decomposition view

As stated before, the decomposition view defines the decomposition of the system into different modules. The only relation used is the decomposition relation, which is usually shown by nesting a module within the overall system module. The view thus shows the decomposition of larger modules into smaller modules. This view is often the basis for project organization and system documentation (Van Vliet 1993). The decomposition view can help to extract user requirements because it provides a checklist of questions to ask when designing a software architecture. The decomposition view for the RA is presented in Fig. 7. This view shows all the possible modules for FMISs, and all modules are optional. The definition of the modules was a design decision based on the features from the previous SLR (available in Appendix B); the modules from the decomposition view should cover all features. The AA will consist of a selection of these modules divided over multiple subsystems.
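The parent–child structure of a decomposition view can be represented as a simple module tree. The module names below are illustrative assumptions loosely based on Figs. 7 and 10, not the actual reference decomposition.

```python
# Decomposition as a parent-child mapping: each key is a module,
# its value the list of sub-modules it is decomposed into.
decomposition = {
    "FMIS": ["herd management", "data management", "reporting"],
    "herd management": ["heat management", "health management"],
    "data management": ["data transfer", "data storage"],
}

def all_submodules(module: str, tree: dict) -> list:
    """Recursively collect every descendant of a module."""
    result = []
    for child in tree.get(module, []):
        result.append(child)
        result.extend(all_submodules(child, tree))
    return result
```

In this sketch, `all_submodules("FMIS", decomposition)` walks the whole tree, which mirrors how a reader traverses the nested boxes of a decomposition diagram.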

figure 7

The reference decomposition view of the FMIS. This view shows all the possible modules for FMISs; all modules are optional

Method for deriving an application architecture

As discussed at the beginning of the section and presented in Fig. 4, a specific view from the RA is reused to derive the corresponding view of the AA. The method for this derivation is presented in Fig. 8. First, the requirements of the application are identified. Based on these requirements, features from the family feature model are selected, and based on the selected features, the required FMIS modules for the AA are selected from the FMIS RA. If a required module can be found in the RA, it is reused; if not, a new module is added to the AA. In case the module is found in the RA, it is checked whether the module can be reused as-is or whether adaptation is needed. A module is completely reusable if the module names are the same or if the modules are interchangeable (e.g., financial management vs. economic management (Yan-e 2012)). If the module is not reusable as-is and change is thus needed, the module can be composed or decomposed. In a composition, multiple modules from the RA are combined into one module in the AA; for example, the data transfer and data storage modules could be combined into a data management module (Murakami et al. 2013). In a decomposition, a module from the RA is split into several smaller modules in the AA; for example, the livestock management module could be split into a cow quality management module and a cow welfare support module (Berger and Hovav 2013). In the final step, the AA is validated to ensure a high-quality AA.
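The selection and adaptation steps of this method can be sketched as a small algorithm. This is a minimal illustration under assumed data structures (plain name lists and mappings); the actual method of Fig. 8 is a manual design activity, and the validation step is omitted here.

```python
def derive_aa_modules(required, reference,
                      interchangeable=None, compositions=None, decompositions=None):
    """Sketch of the module-selection part of the AA derivation method (Fig. 8).

    required        : module names needed by the application (from selected features)
    reference       : module names available in the RA
    interchangeable : application name -> equivalent RA name
                      (e.g. 'economic management' -> 'financial management')
    compositions    : AA module -> RA modules it combines
    decompositions  : RA module -> smaller AA modules it is split into
    """
    interchangeable = interchangeable or {}
    compositions = compositions or {}
    decompositions = decompositions or {}
    aa, new_modules = [], []
    for module in required:
        ra_name = module if module in reference else interchangeable.get(module)
        if ra_name is None:                 # not in the RA: add as a new module
            new_modules.append(module)
            aa.append(module)
        elif ra_name in decompositions:     # decomposition: split into smaller modules
            aa.extend(decompositions[ra_name])
        else:                               # reuse as-is (possibly under another name)
            aa.append(ra_name)
    for combined, parts in compositions.items():
        if all(p in aa for p in parts):     # composition: merge RA modules into one
            aa = [m for m in aa if m not in parts] + [combined]
    return aa, new_modules
```

For instance, passing `compositions={"data management": ["data transfer", "data storage"]}` and `decompositions={"livestock management": ["cow quality management", "cow welfare support"]}` reproduces the two adaptation examples cited in the text.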

figure 8

Method for the derivation of the AA from the RA

The application of two reference views for deriving the corresponding AA views was shown. In principle, the same process can be applied to other possible views (e.g., the uses view, the layered view). For this, the required views must be identified based on the specific application requirements of the selected system (Clements et al. 2010). Once the required views are selected, the approach is the same as discussed above: first the reference views are developed, after which the application views are derived.

Evaluation of the reference architecture: a multi-case study approach

The primary objective of this section was to evaluate the RA based on its practical use and effectiveness with retrospective and prospective case studies. This evaluation was done by developing an AA with the method described in the “Reference architecture” section. For the evaluation of the RA, the case study approach was used, following the guidelines of Runeson and Höst (2009), to validate and improve the proposed RA. The following five steps were followed: (1) case study design, (2) preparation for data collection, (3) collecting evidence, (4) analysis of collected data, and (5) reporting. These steps are further discussed below.

The primary purpose of this case study was to validate and improve the RA. Table 3 presents the design activities for the three selected case studies, with the evaluation of the RA as the primary goal; the goal was the same for all case studies. The case studies aimed to answer the fourth research question of the study and to assess the clarity and effectiveness of the RA. Data collection was carried out through semi-structured interviews with an end-user of FMISs (a farmer), project managers, and FMIS developers. The data were analysed with a qualitative analysis of identified shortcomings of the RA and a review of the identified differences between the FMISs and the RA.

Selected case studies and design

In this section, first, the selection of the case studies is described; subsequently, the objective and planning of the case studies are defined. An essential concept in the design of a case study is triangulation, which means taking multiple angles towards the subject of study (Runeson and Höst 2009). For this study, both data triangulation (multiple cases) and observer triangulation (multiple companies/observers) were ensured.

It was decided to use three different case studies. The first case study was conducted in collaboration with a farmer who uses an FMIS in practice. The second case study was performed in cooperation with a company that is currently developing an FMIS, and the third case study was performed based on a commercially available FMIS.

For the first case study, a farmer was selected who uses a widely used FMIS for dairy farming in the Netherlands. The company that delivers this FMIS has solutions for the dairy, pig-husbandry, poultry and arable domain. The FMIS provider has more than 100 employees and delivers its product internationally. The selected dairy farmer is located in the north of the Netherlands and milks around 120 cows.

The second case is a prospective case performed in cooperation with an agricultural company active on the global market. This company is currently developing an FMIS for livestock farming, making this a prospective case of an FMIS under development.

A third case study was performed with the head of technology of a commercial FMIS developer. The company 365FarmNet, founded in 2013 and with over 80 employees, was chosen. 365FarmNet offers a holistic software-as-a-service (SaaS) solution that aims to support farmers in all aspects of farm management on their entire holding (365FarmNet 2020a). This FMIS is available as a web service and has an application for mobile phones and tablets. There is a basic free version, and add-ons from partner companies can be bought (365FarmNet 2020b). This FMIS was selected because it is considered one of the most innovative systems currently available on the market.

Preparation for data collection

In this section, the procedures and protocols for the data collection are described. Data were collected with the help of semi-structured interviews, in which all questions were planned but not necessarily asked in the order listed; the flow of the interview determined the order in which the questions were handled (Hove and Anda 2005). The list of questions is available in Appendix D; a distinction was made between questions for the farmer, the FMIS under development, and 365FarmNet. The interviews were organised as follows, based on Runeson and Höst (2009):

The objectives of the interview and the case study were presented. It was also explained how the data from the interview would be used.

A set of introductory questions was asked about the background of the FMIS, which were relatively simple to answer.

The main interview questions were asked, which took up the largest part of the interview. First, the RA was evaluated, and the architectural design of their application (AA) was made with the help of the RA. For each presented view, the modelling technique and the logic behind the view were explained.

Using the methods presented in the “Reference architecture” section, the context diagram and decomposition view were derived. For the decomposition view, it was first decided into which sub-systems the application view should be divided; this could, for example, be a division into an arable, a dairy and a data sub-system. Afterwards, for each module of the case study, it was checked whether a module from the reference decomposition view could be reused. For reasons of length, this paper only shows the diagrams for the first case study; diagrams for the two other cases were also successfully made but are not presented here.

Case study 1: farmer

The AA presented in this section is based on the specific use scenario of the farmer. The farmer mainly used the FMIS for registration purposes and considered the ability to print administrative reports for himself and for the government most important. According to the farmer, communication with other organizations and integration with the feed computer in the barn could be improved, for example with extra modules. These modules are not presented in the derived AA, since they are not used by the farmer; the AA is therefore limited. With the help of the overview of all modules in the RA, the obstacle “FMIS not complete” (see Appendix C) could be resolved in this case.

Context diagram

Figure 9 presents the context diagram for the case of the farmer. In comparison with Fig. 6, this diagram has fewer external entities: multiple entities were not applicable to a dairy farmer, and others did not have a relationship with the FMIS in the case of the farmer. Furthermore, the input supplier was renamed the feed supplier, and the customer was decomposed into the milk buyer and the cattle dealer.

figure 9

The context diagram for the FMIS of the farmer. The diagram is based on the entities that interact with the system in the case of the farmer

Figure 10 presents the decomposition view for the case of the farmer. The modules were selected from Fig. 7, and multiple elements could be copied one-to-one. The reporting module was decomposed into weekly and yearly reporting. The main functionality of the system lies in the herd, reproductivity, and data management modules; therefore, three sub-level decomposition views are presented in which these three modules are decomposed into sixteen smaller modules. Dependencies between modules are not shown in this view; in practice, however, the heat management module can, for example, use the decision support module to predict fertility.

figure 10

The decomposition view for the FMIS of the farmer. Only the components of the system that the farmer used are presented

Case study 2: FMIS under development

With the help of the provided method for deriving an AA from the RA, a context diagram and a decomposition view were made for the FMIS under development. The agricultural company mainly focuses on making an FMIS with basic features from which reports can be obtained; these reports can be examined and, based on this, the agricultural company can provide the farmer with advice. In the context diagram, the farm owner can have one or multiple farms, and each farm has its own manager who is responsible for the daily operational management. Additional interactions with the system, requested by the agricultural company, were added to the context diagram. The FMIS would consist of two main modules: herd management and feed management. Furthermore, some modules were made optional: sensor management, HR management, traceability management and financial management. These modules could be added to the system later or were optional for the farmer to choose from. A decomposition was needed for the herd, feed, and financial management modules to split them into smaller modules. A composition was needed for data management, which is composed of the data transfer, storage, acquisition and processing modules. For the development of the new FMIS, the obstacles listed in Appendix C should be taken into account.

Case study 3: 365FarmNet

The investigated FMIS was delivered as a platform and attached great importance to the input of add-on suppliers who provided extra modules for the platform. The main focus of the system was on documentation and supporting the user with decision making. Due to its platform structure, the system can overcome multiple obstacles listed in Appendix C; in particular, system integration is improved by this structure. A context diagram and a decomposition view could be derived with the help of the RA and the methodology presented. In the context diagram, multiple entities were kept optional since these entities can be granted access to the system but do not necessarily have it. Some entity names were changed so that the context diagram followed the company vocabulary. The decomposition view was split up into three sub-level decomposition views: company, crop, and dairy. The company sub-level contained modules like human resource and stock management (decomposed from resource management), while the crop and dairy sub-levels contained more production-related modules. Also, government linkage and add-on management modules were added to the decomposition sub-levels, since these were considered of key importance for the FMIS.

Overall results and findings

For all three case studies, the AA could be derived quickly from the RA. The answers to the semi-structured interviews and the overall discussion with the stakeholders showed that the method substantially simplified and sped up the design of AAs. The stakeholders in all three cases indicated that they were familiar with neither the feature diagram nor the architecture view modelling approach; these therefore needed to be explained first, but this took little effort in all cases. The stakeholders confirmed the need for more precise architecture modelling and considered the approach both practical and useful. The separation of the architecture views was considered useful, since the architecture diagrams currently in use were usually informal, single diagrams mixing the different concerns.

The RA appeared to be understandable and practical for making the possible modules explicit. Compared with the modules required for the AAs in the cases, the RA also appeared to be complete, since no specific detailed models needed to be added to the AA. The context view was considered very practical since it made the interaction with external entities explicit; a mistake often made while drawing the context diagram was trying to define interactions among the external entities themselves. The decomposition view was found simple and expressive and was in fact already used in some form in all three cases, although not formally. Overall, the reuse approach provided by the method was highly appreciated. Furthermore, it was indicated that not only the RA itself but also the overall process of deriving an AA from a feature diagram was helpful, because it made the design decisions explicit and discussable. The stakeholders indicated that they would continue to use the method. One question raised was whether other views existed; this will be considered in future projects.

This study presents a novel RA based on well-established architecture modelling approaches for FMISs using input from a previous literature review (Tummers et al. 2019 ). Thereby this study paves the way for similar studies on FMISs. From the results, several interesting observations could be identified.

With the help of the SLR from the previous study, four RAs were identified. Two of the four did not focus on the FMIS, but on the farm software ecosystem or farm management system, which have a far more comprehensive scope than FMIS (Kruize et al. 2016 ; Kaloxylos et al. 2014 ; Kassahun et al. 2016 ). Two others did not follow the ISO/ISEC/IEEE architecture standard (López-Riquelme et al. 2017 ; Nikkilä et al. 2010 ). Moreover, three out of the four RAs are not named as such in the identified literature; only Kruize et al. ( 2016 ) named the proposed architecture an RA. These observations might indicate that the field of (reference) architecture is a relatively new subject for researchers of FMISs.

Based on the observations in the previous paragraph, it was decided that there was a need to design a new RA (in alignment with Fig. 3). In the RA, only a set of two views could be proposed (given in the “Reference architecture” section) based on the literature input. If grey literature (e.g., software specification documents, website information) or expert interviews had been utilised, the input for the requirements could have been different. This difference could have led to a different selection of RA views, or to more views than the context diagram and decomposition view.

To derive the requirements for the context and decomposition views, the architecture design process was applied, preceded by the requirements analysis: the stakeholders were identified and their requirements derived. The input for the reference context diagram was based on the stakeholders and their concerns from the literature. A disadvantage of a context diagram is that it shows only a limited number of the interactions of the entities with the system; more interactions may be possible than the ones shown in the diagram. This risk was mitigated by showing only the essential interactions derived from the stakeholder concerns and by making multiple interactions optional.

The second view was the decomposition view. Although this view was kept as generic as possible, it is difficult to ensure that every necessary component is captured. The selection of components for the different views was a trade-off between the level of flexibility and the level of reuse. As many components as possible were included by making most of them optional; it can, however, always be the case that some elements for a particular application are missing. This risk was mitigated by presenting the methods for the derivation of an AA in Fig. 8. These methods, however, allow for much change to the RA, which could make the use of the RA less generic again.

The designed RA was used for deriving AAs with the help of a multi-case study approach. With the derivation methods from Fig. 8, the AAs could be mapped; compositions and decompositions were needed, as well as some new modules. This indicates that the RA, in combination with the provided methodology, was complete enough to map the case studies. What was seen from the case studies is that FMISs in practice cover only a small, domain-dependent part of the RA. For a multi-case study, there is always the threat of misinterpretation of concepts. To mitigate this threat, the interpretation of the questions was verified with the interviewees, and when the views from the case studies were made, these were again verified by the interviewees to validate their correctness. From the retrospective and prospective case studies, it was identified that the RA and the methodology to derive an AA can be used to evaluate existing FMISs and to guide the design of new FMISs.

It is shown that the methods defined by the software architecture design community can be used in the agri-food domain. It is also believed that the results of the study can be applied in a broader context than FMISs only. With the RA and the methods to derive AAs, future research in RA can be evaluated against well-established architecture design approaches. It is believed that in comparison with other RAs for FMISs, the RA is more generic and can be applied in more domains of the agricultural sector.

In the 1980s and the beginning of the 1990s, new software systems known as enterprise resource planning (ERP) systems revolutionised many large businesses. Companies such as SAP provided off-the-shelf solutions, which were tailored and implemented based on the company’s requirements (Rashid et al. 2002). It is believed that such a revolution is also possible for FMISs: when off-the-shelf modules can be picked from an RA and combined into one FMIS based on the needs of the farmer, the concept of the FMIS can experience significant growth.

The RA presented in this study uses a different approach than other RAs for FMISs. The main objective was to present a new RA for FMIS based on well-established software architecture design practices following the current software architecture standard (ISO/IEC/IEEE 42010 2011). Related RAs for FMISs did not follow this standard or proposed an RA for the agricultural domain with a scope broader than the FMIS alone. From the study, it becomes evident that the notion of architecture design and the knowledge of modelling information systems in previous literature on FMISs can be considered weak.

Based on the inputs from the SLR in the previous study, the objective could be fulfilled, and an RA could be presented based on two architectural reference views: the context diagram and the decomposition view. With a multi-case study approach, FMIS AAs were derived from the RA, thereby validating the RA and the methodology to derive AAs. The methods described in this study are generic and can be used both for developing a new RA and for enhancing the current RA with other views. The methods for the derivation of an AA are universal and can be used for the derivation of all possible views. To further improve the RA, it can be validated by performing more case studies with the methods presented in this study.

The presented RA is generic and can be used for FMISs in all sectors of the agricultural domain to overcome the FMIS obstacles and stakeholder concerns. The genericity is due to the flexibility of the system, which is based on the optionality of the components. With the help of the presented RA, researchers can identify the key research directions and practitioners can benefit from the results of this study by a thorough knowledge of the FMIS architecture. Different FMISs can be compared and new FMISs built.

In future work, the method will be applied in other industrial case studies. Also, a broader set of viewpoints and related agricultural domains will be considered in which the methods can be applied.

365FarmNet (2020a). About us. Retrieved May 14, 2020 from https://www.365farmnet.com/en/company/about-us/ .

365FarmNet (2020b). Cooperation and integration. Retrieved May 14, 2020 from https://www.365farmnet.com/en/365partner/ .

Allen, J., & Wolfert, J. (2011). Farming for the future: Towards better information-based decision-making and communication-Phase I: Australasian stocktake of farm management tools used by farmers and rural professionals. New Zealand Centre of Excellence in Farm Business Management, Palm. Technical Report, AgFirst Consultancy/Wageningen University and Research Centre.

Ampatzidis, Y., Tan, L., Haley, R., & Whiting, M. D. (2016). Cloud-based harvest management information system for hand-harvested specialty crops. Computers and Electronics in Agriculture, 122 , 161–167.

Barmpounakis, S., Kaloxylos, A., Groumas, A., Katsikas, L., Sarris, V., Dimtsa, K., et al. (2015). Management and control applications in agriculture domain via a future internet business-to-business platform. Information Processing in Agriculture, 2 (1), 51–63.

Bass, L., Clements, P., & Kazman, R. (2003). Software architecture in practice . Boston, USA: Addison Wesley Professional.

Berger, R., & Hovav, A. (2013). Using a dairy management information system to facilitate precision agriculture: The case of the AfiMilk® system. Information Systems Management, 30 (1), 21–34.

Bligaard, J. (2014). Mark online, a Full Scale GIS-based Danish farm management information system. International Journal on Food System Dynamics, 5 (4), 190–195.

Bojan, V.-C., Raducu, I.-G., Pop, F., Mocanu, M., & Cristea, V. (2015). Cloud-based service for time series analysis and visualisation in farm management system. In 2015 IEEE International Conference on Intelligent Computer Communication and Processing (ICCP) , pp. 425–432. IEEE

Burlacu, G., Cojocaru, L.-E., Danila, C., Popescu, D., & Stanescu, A. M. (2013). A digital business ecosystem integrated approach for farm management information system. In 2013 2nd International Conference on Systems and Computer Science (ICSCS) , pp. 80–85. IEEE.

Capterra (2020). Farm management software. Retrieved May 14, 2020, from https://www.capterra.com/farm-management-software/ .

Carli, G., & Canavari, M. (2013). Introducing direct costing and activity based costing in a farm management system: A conceptual model. Procedia Technology, 8 , 397–405.

Chen, P.-J., Du, Y.-C., Cheng, K.-A., & Po, C. Y. (2016). Development of a management system with RFID and QR code for matching and breeding in Taiwan pig farm. In 2016 13th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON) , pp. 1–5. IEEE.

Choubey, M. K. (2011). IT infrastructure and management (for the GBTU and MMTU) . Delhi, India: Pearson Education India.

Clements, P., Garlan, D., Little, R., Nord, R., Stafford, J., Bachmann, F., et al. (2010). Documenting software architectures: Views and beyond (2nd ed.). Boston, USA: Addison-Wesley Professional.

Cojocaru, L.-E., Burlacu, G., Popescu, D., & Stanescu, A. M. (2014). Farm Management Information System as ontological level in a digital business ecosystem. Service orientation in Holonic and multi-agent manufacturing and robotics (pp. 295–309). Cham, Switzerland: Springer.

FIWARE community (2020). FIWARE: The open source platform for our smart digital future. Retrieved May 14, 2020 from https://www.fiware.org/ .

Fountas, S., Carli, G., Sørensen, C., Tsiropoulos, Z., Cavalaris, C., Vatsanidou, A., et al. (2015a). Farm management information systems: Current situation and future perspectives. Computers and Electronics in Agriculture, 115 , 40–50.

Fountas, S., Sorensen, C. G., Tsiropoulos, Z., Cavalaris, C., Liakos, V., & Gemtos, T. (2015b). Farm machinery management information system. Computers and Electronics in Agriculture, 110 , 131–138.

Hewage, P., Anderson, M., & Fang, H. (2017). An Agile Farm Management Information System Framework for Precision Agriculture. In Proceedings of the 9th international conference on information management and engineering , pp. 75–80. ACM, New York, USA: Association for Computing Machinery.

Honda, K., Ines, A. V. M., Yui, A., Witayangkurn, A., Chinnachodteeranun, R., & Teeravech, K. (2014). Agriculture information service built on geospatial data infrastructure and crop modelling. In Proceedings of the 2014 international workshop on web intelligence and smart sensing , pp. 1–9. ACM, New York, USA: Association for Computing Machinery.

Hove, S. E., & Anda, B. (2005). Experiences from conducting semi-structured interviews in empirical software engineering research. In Proceedings - international software metrics symposium , pp. 10–23. Picataway, USA: IEEE.

Husemann, C., & Novković, N. (2014). Farm management information systems: A case study on a German multifunctional farm. Economics of Agriculture, 61 (2), 441–453.

ISO/IEC/IEEE 42010 (2011). Systems and software engineering–architecture description. Technical Report, ISO/IEC/IEEE 42010.

International Organization for Standardization (ISO) (2017). ISO 11783-1:2017 Tractors and machinery for agriculture and forestry—serial control and communications data network—Part 1: General standard for mobile data communication. Retrieved May 14, 2020, from https://www.iso.org/standard/57556.html .

Jiang, R., & Zhang, Y. (2013). Research of agricultural information service platform based on internet of things. In 2013 12th International Symposium on Distributed Computing and Applications to Business, Engineering & Science , pp. 176–180. IEEE.

Kaloxylos, A., Eigenmann, R., Teye, F., Politopoulou, Z., Wolfert, S., Shrank, C., et al. (2012). Farm management systems and the Future Internet era. Computers and Electronics in Agriculture, 89 , 130–144.

Kaloxylos, A., Groumas, A., Sarris, V., Katsikas, L., Magdalinos, P., Antoniou, E., et al. (2014). A cloud-based farm management system: Architecture and implementation. Computers and Electronics in Agriculture, 100 , 168–179.

Kang, K. C., Cohen, S. G., Hess, J. A., Novak, W. E., Peterson, A. S. (1990). Feature-Oriented Domain Analysis (FODA) Feasibility Study. Technical Report, Carnegie-Mellon University, Pittsburgh, PA, USA: Software Engineering Inst.

Kassahun, A. (2017). Aligning business processes and IT of multiple collaborating organisations . PhD Thesis, Wageningen University, The Netherlands.

Author information

Authors and affiliations

Information Technology Group, Wageningen University & Research, Hollandseweg 1, 6706 KN, Wageningen, The Netherlands

J. Tummers, A. Kassahun & B. Tekinerdogan

Corresponding author

Correspondence to B. Tekinerdogan.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix A: Included SLR studies

See Table 4.

Appendix B: Features

See Table 5.

Appendix C: Obstacles

See Table 6.

Appendix D: Semi-structured interview questions

Introductory questions.

What is your experience with FMISs?

For which purposes do you use your FMIS?

What do you find most important about your FMIS?

Which feature of the FMIS do you use the most?

How easy do you find your FMIS to use?

Which problems do you encounter with the use of the FMIS?

What is your FMIS missing?

FMIS under development

How would you describe an FMIS?

What is the main reason for developing an FMIS?

What do you consider most important for an FMIS?

Where are most of your customers situated?

In which agricultural sectors are you currently active and to which sectors do you want to expand?

What do you think the most used feature of the FMIS will be?

How important do you think the add-ons are for your system?

What is the most used feature of the FMIS?

Main questions.

Farmer, FMIS under development & 365FarmNet

Do you miss any relationship with external entities in the context diagram?

Do you think some relationships in the context diagram are unnecessary?

Make the context diagram based on methods in the “Reference architecture” section.

Do you think some modules of the decomposition view are unnecessary?

Make the decomposition view based on methods in the “Current reference architectures” section.

Are you missing an architectural view?

How do you think this RA can be improved?

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Tummers, J., Kassahun, A. & Tekinerdogan, B. Reference architecture design for farm management information systems: a multi-case study approach. Precision Agriculture, 22, 22–50 (2021). https://doi.org/10.1007/s11119-020-09728-0

Published: 01 June 2020

Issue Date: February 2021

DOI: https://doi.org/10.1007/s11119-020-09728-0

  • Farm management information system
  • Multi-case study research

EU AI Act: first regulation on artificial intelligence

The use of artificial intelligence in the EU will be regulated by the AI Act, the world’s first comprehensive AI law. Find out how it will protect you.

As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better conditions for the development and use of this innovative technology. AI can create many benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing; and cheaper and more sustainable energy.

In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under the proposal, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users; the different risk levels will mean more or less regulation. Once approved, these will be the world’s first rules on AI.

What Parliament wants in AI legislation

Parliament’s priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.

Parliament also wants to establish a technology-neutral, uniform definition for AI that could be applied to future AI systems.

AI Act: different rules for different risk levels

The new rules establish obligations for providers and users depending on the level of risk posed by the artificial intelligence. While many AI systems pose only minimal risk, they still need to be assessed.

Unacceptable risk

Unacceptable risk AI systems are systems considered a threat to people and will be banned. They include:

  • Cognitive behavioural manipulation of people or specific vulnerable groups: for example voice-activated toys that encourage dangerous behaviour in children
  • Social scoring: classifying people based on behaviour, socio-economic status or personal characteristics
  • Biometric identification and categorisation of people
  • Real-time and remote biometric identification systems, such as facial recognition

Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric identification systems will be allowed in a limited number of serious cases, while “post” remote biometric identification systems, where identification occurs after a significant delay, will be allowed to prosecute serious crimes and only after court approval.

High risk

AI systems that negatively affect safety or fundamental rights will be considered high risk and will be divided into two categories:

1) AI systems that are used in products falling under the EU’s product safety legislation. This includes toys, aviation, cars, medical devices and lifts.

2) AI systems falling into specific areas that will have to be registered in an EU database:

  • Management and operation of critical infrastructure
  • Education and vocational training
  • Employment, worker management and access to self-employment
  • Access to and enjoyment of essential private services and public services and benefits
  • Law enforcement
  • Migration, asylum and border control management
  • Assistance in legal interpretation and application of the law.

All high-risk AI systems will be assessed before being put on the market and also throughout their lifecycle.

General purpose and generative AI

Generative AI, like ChatGPT, would have to comply with transparency requirements:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Limited risk

Limited risk AI systems should comply with minimal transparency requirements that would allow users to make informed decisions. After interacting with the applications, the user can then decide whether they want to continue using it. Users should be made aware when they are interacting with AI. This includes AI systems that generate or manipulate image, audio or video content, for example deepfakes.
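The tiered scheme described above can be sketched as a small decision function. This is an illustrative toy model of the article's summary only, not legal logic; every attribute and set name in it is invented for the example.

```python
# Toy sketch of the AI Act risk tiers as summarised in this article.
# All identifiers here are illustrative and carry no legal meaning.
from dataclasses import dataclass, field

# Practices the article lists as banned outright (unacceptable risk)
UNACCEPTABLE_PRACTICES = {
    "cognitive_behavioural_manipulation",
    "social_scoring",
    "biometric_categorisation",
    "realtime_remote_biometric_identification",
}

# Deployment areas that trigger high-risk registration in the EU database
HIGH_RISK_AREAS = {
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_services_and_benefits",
    "law_enforcement",
    "migration_asylum_border_control",
    "legal_interpretation",
}

@dataclass
class AISystem:
    practices: set = field(default_factory=set)
    area: str = ""                          # deployment area, if any
    product_safety_regulated: bool = False  # e.g. toys, cars, medical devices
    interacts_with_people: bool = False     # e.g. chatbots, deepfakes

def risk_tier(system: AISystem) -> str:
    """Map a system's characteristics to a risk tier, per the summary above."""
    if system.practices & UNACCEPTABLE_PRACTICES:
        return "unacceptable"  # considered a threat to people; banned
    if system.product_safety_regulated or system.area in HIGH_RISK_AREAS:
        return "high"          # assessed before market entry and over its lifecycle
    if system.interacts_with_people:
        return "limited"       # transparency obligations apply
    return "minimal"

print(risk_tier(AISystem(practices={"social_scoring"})))  # unacceptable
print(risk_tier(AISystem(area="law_enforcement")))        # high
print(risk_tier(AISystem(interacts_with_people=True)))    # limited
```

Note that the real regulation layers obligations rather than assigning a single label (a high-risk system also carries transparency duties, for instance); a first-match tier function is merely the simplest way to mirror the article's four headings.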

On 9 December 2023, Parliament reached a provisional agreement with the Council on the AI Act. The agreed text will now have to be formally adopted by both Parliament and Council to become EU law. Before all MEPs have their say on the agreement, Parliament’s internal market and civil liberties committees will vote on it.

More on the EU’s digital measures

  • Cryptocurrency dangers and the benefits of EU legislation
  • Fighting cybercrime: new EU cybersecurity laws explained
  • Boosting data sharing in the EU: what are the benefits?
  • EU Digital Markets Act and Digital Services Act
  • Five ways the European Parliament wants to protect online gamers
  • Artificial Intelligence Act

Related articles

Digital transformation in the EU
