The value of “connected farming” in making life easier for farmers and helping them to reduce the environmental impact of their practices is well established. Another of its advantages is beginning to emerge: by producing large quantities of quality data close to research standards, it helps bring agronomic and zootechnical research closer to farmers’ needs, and could better inform public agricultural policies.
Digital agriculture is a concept that is becoming mainstream, as the prominence of the topic at the recent Paris Agricultural Show demonstrated. In the gloomy atmosphere created by the "agri-bashing" that many farmers feel subjected to, it was one of the few topics to give a positive and attractive image of recent developments in agriculture.
Digital farming tools were first designed to make life easier for farmers (GPS guidance, herd-monitoring sensors) and to help them optimize their practices environmentally (connected weather stations, crop models used to fine-tune the use of inputs). They have also helped strengthen ties with consumers, who, thanks to the development of traceability and social networks, can now put a face and a name to the producer behind the food they buy.
More discreetly, connected agriculture is also beginning to have a new beneficial consequence, which could in the future play an even more positive role for the agricultural world: bringing farmers closer to the world of research, and thus to the policy makers who draw inspiration from it.
When science must be done directly on the farm
This is already a reality in some areas of R&D in digital agriculture. Take the example of herd-monitoring sensors: they were initially developed to detect unusual, clearly identified events, such as heat or calving. These first applications were developed in a traditional research framework, moving from the lab to the field: the algorithms for detecting these events were developed in trials on the experimental farms of research or technical institutes, then validated on a small number of farms, before being launched commercially.
Now that these initial applications are mature, research is focusing on analyses of the animals' daily behaviour and welfare: for example, measuring the time spent standing or lying down, or feeding and rumination times. For these subjects, more subtle changes in the "daily life" of the animals must be detected against their ordinary activity. This makes it very difficult to develop such algorithms on experimental farms, where the usual activities of production animals (moving to milking, going out to pasture, etc.) are frequently disrupted by experiments that change their normal behaviour and involve movements or restraints that would not occur on commercial farms. This type of work must therefore be carried out directly in the field, with experiments under controlled conditions serving only as spot checks in a minority of situations. This is an inversion of the classic relationship between scientific experimentation and field data.
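To give a concrete idea of the kind of algorithm involved, here is a minimal sketch of how a subtle behavioural change might be flagged against an animal's own recent baseline. The data, the window length and the threshold are illustrative assumptions, not taken from the article; commercial systems use far more sophisticated models of daily rhythms.

```python
import statistics

# Hypothetical daily lying times (hours) from an accelerometer collar on one cow.
lying_hours = [11.8, 12.1, 11.9, 12.3, 12.0, 11.7, 12.2, 9.4]  # last day is unusual

def deviation_alert(series, window=7, threshold=2.5):
    """Flag the latest day when it deviates from the animal's own recent baseline.
    A crude z-score against a rolling window of previous days."""
    baseline = series[-window - 1:-1]            # the `window` days before the latest one
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    z = (series[-1] - mu) / sigma
    return z, abs(z) > threshold

z, alert = deviation_alert(lying_hours)
print(round(z, 1), alert)
```

The point of the sketch is the reference frame: the alert compares the animal to itself in its normal environment, which is precisely the data that experimental farms struggle to provide.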
From the lab to the vineyard… and back!
The use of mechanistic crop models in decision-support tools is another example of bringing research closer to farmers’ concerns. These models, derived from agronomic research, are increasingly used for yield forecasting and the management of the resulting inputs (irrigation, fertilization). Epidemiological models of the same type are also used to predict the appearance of diseases or pests threatening crops, and thus to position pesticide treatments as accurately as possible.
Because of their design, the result of long research work in ecophysiology, these models are robust and predictive enough to lend themselves to plausible simulations of the potential effect of changing practices for agro-ecological purposes, or of adapting to climate change. They also have the advantage of objectively quantifying the environmental conditions to which crops are exposed. Evapotranspiration is a classic example of a simple indicator of crop water demand, which can then serve as a reference to check whether the irrigation practised by the farmer has indeed avoided wasting water. It remains, however, a relatively basic indicator, relevant only in the simplest cases: those where the aim is merely to preserve yield potential by avoiding any water deficit in the crop.

For some productions, irrigation is a more complex issue, because a slight, well-controlled water deficit improves the quality of the harvest. The best-known case is the vine: the ideal management, defined by the specifications of the wine appellations, aims to create a moderate water deficit during grape ripening, more or less pronounced depending on the type of wine to be produced. In this case, irrigation management requires models far more complex than a simple evapotranspiration calculation, relying not only on climatic data but also on soil characteristics and the volume of vegetation in the vineyard. Initially, this is again a top-down approach, from the laboratory to the field. But the use of these models on farms then provides valuable feedback that brings the theoretical work closer to the practice of farmers and their advisors.
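To illustrate what such a "basic indicator" looks like in practice, here is a minimal sketch of an irrigation estimate built on reference evapotranspiration, computed with the Hargreaves-Samani formula. The crop coefficient, the irrigation efficiency and the weather values are invented for the example; they are not figures from the article.

```python
import math

def et0_hargreaves(t_mean, t_min, t_max, ra):
    """Reference evapotranspiration (mm/day), Hargreaves-Samani formula.
    ra: extraterrestrial radiation expressed as equivalent evaporation (mm/day)."""
    return 0.0023 * ra * (t_mean + 17.8) * math.sqrt(max(t_max - t_min, 0.0))

def irrigation_need(et0, kc, rainfall, efficiency=0.85):
    """Crude daily irrigation requirement (mm): crop demand minus rain,
    corrected for the efficiency of the irrigation system."""
    etc = kc * et0  # crop evapotranspiration via a crop coefficient
    return max(etc - rainfall, 0.0) / efficiency

# Hypothetical mid-summer day
et0 = et0_hargreaves(t_mean=24.0, t_min=16.0, t_max=32.0, ra=16.0)
print(round(et0, 2), "mm/day of reference ET")
print(round(irrigation_need(et0, kc=0.6, rainfall=0.0), 2), "mm of irrigation needed")
```

This is exactly the "simplest case" mentioned above: it only protects yield potential, and says nothing about the controlled deficit sought in viticulture, which is where the more complex mechanistic models come in.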
The Vintel software, developed by the company itk in partnership with (among others) INRA and CIRAD, is a good example of such round trips between the lab and the field. Designed to optimize precision irrigation of vines, it is based on a model derived from research work, built around a classic research indicator of water stress: the predawn leaf water potential. This indicator is the most reliable measure of a vine's water status, but it is tedious to obtain, which limits its use in vineyards: it must be measured at dawn, with a specific instrument called a pressure chamber. Some viticultural consultants, particularly in California, use pressure chambers to advise winegrowers. However, they tend to take these measurements at midday, both for convenience and to assess the water deficit of the plot at the time of day when it is at its peak. This midday measurement is much less common in research, so it was initially impossible to develop a mechanistic model to simulate it. Vintel was therefore first released with a model estimating only the predawn leaf water potential. A few years of using this first version, with consultants accustomed to estimating the midday leaf water potential, then made it possible to develop a second model for the midday potential, combining meteorological data with indicators derived from the predawn potential, without going back to the laboratory.
This example clearly demonstrates the new complementarity between research and digital tools for farmers: it is indeed field data that made it possible to develop a model of the midday leaf water potential, in line with the habits of viticultural technicians. But those data alone would not have been enough to build a reliable statistical model: only their combination with indicators from a mechanistic model derived from research produced a model robust enough to be entrusted to winegrowers and consultants.
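The structure of such a hybrid model can be sketched very simply: a statistical layer is calibrated on field measurements, but one of its inputs is the output of the mechanistic model. All the numbers below are invented for illustration; the real Vintel models are of course far richer.

```python
# Hypothetical observations: predawn leaf water potential simulated by the
# mechanistic model (MPa) and midday leaf water potential measured in the
# field with a pressure chamber (MPa).
psi_predawn_sim = [-0.2, -0.3, -0.5, -0.6, -0.8]
psi_midday_obs  = [-0.9, -1.1, -1.4, -1.4, -1.8]

def fit_linear(x, y):
    """Ordinary least squares for y = a*x + b (closed form, one predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# The statistical layer is calibrated on field data, but its input is the
# output of the mechanistic model: this is the "hybrid" part.
a, b = fit_linear(psi_predawn_sim, psi_midday_obs)

def psi_midday_pred(psi_predawn):
    return a * psi_predawn + b

print(round(psi_midday_pred(-0.4), 2))
```

The key design point is that the regression never sees raw weather variables alone: it leans on a physically grounded simulation, which is what makes the result transferable beyond the plots where it was calibrated.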
"Medium Data" vs Big Data
A few years ago, the explosion of “Big Data” technologies and their introduction into the agricultural world gave rise to a somewhat binary vision opposing two scientific approaches:
- On the one hand, the classical approach of agronomic or zootechnical research: a relatively small number of high-quality experiments used to develop predictive models for decision support, grounded in the human expertise of researchers;
- On the other hand, the new data-driven "Big Data" approaches, applying machine-learning techniques (including deep learning) to the massive data streams from new sensors deployed in agriculture (combine yield sensors, data collected by milking robots).
The craze for Big Data rested on the assumption that deep learning could produce reliable predictive models despite the "noise" in the masses of data collected, which exceed what human expertise can analyze. In practice, this hope quickly ran up against the major pitfall of machine-learning techniques: their opacity for users, both end users (farmers or breeders) and service designers! Machine learning can now, admittedly, derive apparently satisfactory decision rules or models from any sufficiently large data set. But because the "reasoning" behind these models is unknown, even their designers cannot predict how well the rules or models will transfer to new contexts: a rather distressing uncertainty when one wishes to deploy new agricultural services beyond the region where they were initially validated, or in new climatic contexts.
In addition to its sensitivity to unpredictable climatic hazards, agriculture has another unfortunate feature for machine learning: the data that can actually be accumulated in the field are far from representing all the possible combinations of cropping techniques. The cropping sequences practised by farmers are shaped by their habits, their experience and the expertise of their advisers, and are therefore implicitly bounded by human reasoning. The situation is thus completely different from fields such as machine learning applied to games like chess or go, where the algorithm can, starting from the rules of the game, test every possible combination on its own, including those a human expert would not think of. In agriculture, artificial intelligence is constrained by the fact that the available data are the consequence of human reasoning, which prevents it from finding original "solutions" and inventing new practices.
The result of these constraints is that purely data-driven approaches have struggled to make a decisive breakthrough in agricultural decision support. The future will undoubtedly lie, as the example of Vintel has shown, in combining data-driven approaches with mechanistic models that allow human expertise to be integrated into artificial intelligence. This new vision, hybrid AI, has been selected as one of the major themes of ANITI, the new Artificial Intelligence institute being set up in Toulouse, and agriculture has been identified as one of its priority fields of application.
This close interweaving of scientific expertise and data from farms has an obvious corollary: the need to bridge the gap between Big Data and research data. This is the mission of what might be called "Medium Data": well-qualified data from farms, or at least from plots managed in conditions close to farmers'. Until now, this role of producing intermediate data has fallen entirely to the experiments of agricultural development organizations: technical institutes, chambers of agriculture, cooperatives. Digital agriculture will enable the emergence of a new category of "medium data": data of a quality close to research standards, but distributed across hundreds or thousands of farms.
Between the scientific data from research, high in quality but few in number, and the "Big Data" from sensors on agricultural equipment, connected agriculture allows the emergence of "Medium Data": data of a quality close to that of research, acquired on farms rather than on unrepresentative experimental plots. It is this data continuum that will feed hybrid artificial intelligence (a combination of machine learning and human mechanistic expertise), one of the most promising avenues of current AI.
Indicators to inform public agricultural policies
We have seen, with the example of irrigation, that agricultural decision-support tools lend themselves well to creating objective indicators of crop needs; the same approach is perfectly transposable to fertilization and crop protection. Epidemiological models, already used to recommend optimal treatment dates for diseases and pests, could also be used to quantify at plot level the currently vague and subjective notion of "disease pressure". Such indicators would be valuable for improving the monitoring of Ecophyto, the pesticide-use reduction plan launched in 2010 following the Grenelle Environment Round Table. The least that can be said is that, nearly ten years after its inception, the plan is far from the 50% reduction target ("if possible") assigned to it: pesticide consumption shows no significant change in the national average. More disturbing still, even the pilot farms of the scheme, the Dephy network, are very far from the expected objective. Faced with what can hardly be described as anything other than a failure, the French Academy of Agriculture recently made recommendations to improve the management of the Ecophyto plan, including the creation of such indicators of disease pressure on crops. Digital agriculture could also play a major role in another of the Academy's proposals: making the surveys on cropping practices annual, as they are the only references that allow farmers' pesticide consumption to be calculated precisely. Indeed, the current indicator of the Ecophyto plan, the NODU, does not lend itself to an agronomic interpretation that would make it possible to calculate the potential reduction in pesticide use at farm level. Another indicator, the IFT (treatment frequency index), would allow this calculation, but it is currently computed only every three years, because of the cost of the surveys required to collect the data.
This is despite the fact that plot-management software can calculate this indicator automatically for the farmers equipped with it. A representative network of farms using such software would therefore make it possible to calculate the IFTs actually practised every year, at lower cost, and to cross-reference them with the disease-pressure indicators mentioned above. It would then be possible to interpret the monitoring of the Ecophyto plan much better... and probably to assign it more realistic objectives, differentiated by crop and by region!
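As a sketch of the calculation such software automates, here is the standard IFT logic: each application contributes the ratio of the applied dose to the approved reference dose, weighted by the fraction of the plot actually treated. The records below are invented for illustration.

```python
# Each record: applied dose, approved reference dose (same units),
# area treated (ha), total plot area (ha). Values are hypothetical.
applications = [
    (1.0, 1.5, 10.0, 10.0),  # fungicide at two-thirds of the reference dose, whole plot
    (0.5, 0.5,  5.0, 10.0),  # herbicide at full dose, on half of the plot
]

def ift(records):
    """Treatment Frequency Index (IFT) of a plot: sum over applications of
    (applied dose / reference dose) * (treated area / plot area)."""
    return sum((d / d_ref) * (a / a_plot) for d, d_ref, a, a_plot in records)

print(round(ift(applications), 2))  # 1.0/1.5 + 0.5 -> 1.17
```

An IFT of 1 corresponds to one full-dose treatment of the whole plot per season; computing it from spraying records already held in farm software is exactly the low-cost, annual monitoring the Academy's proposal calls for.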
Participatory science, which draws on the knowledge of its future users and of stakeholders from civil society, is one of the strong trends in current research, and INRA is heavily involved in this field. However, much participatory science work remains very asymmetrical: the researchers are often the only ones to carry out the synthesis, based on the informal and poorly structured knowledge of the stakeholders mobilised for the project. Connected agriculture offers a unique opportunity for farmers to take ownership of the research subjects that concern them, by producing themselves data that is as intelligible to them as it is to the researchers who will use it. Beyond its effects on the daily work of farmers, it therefore has great potential to bring research closer to their needs, and to enable policy makers to better understand their practices. It is on this condition that agriculture will be able to meet society's many expectations of it.