An optimization model for perishable products with time-based demand, based on the Pareto frontier, under uncertainty

Optimizing the perishable-food supply chain while taking into account location, time, inventory control, and routing has become one of the top research priorities in today's competitive market. The primary objective is to reduce the overall cost; the second is to diminish the probability of warehouses failing to preserve the product (storage waste), which in turn maximizes product quality. This research investigates an integrated supply-chain planning problem in which a central warehouse distributes food. The model developed in the present study considers two conflicting objectives for supply-chain optimization. Foods are classified into three categories according to their deterioration rate: common, high, and very high. The concepts of the Pareto frontier and a set of optimal solutions are used. Demand is defined as uncertain and dependent on two factors: the demand of the prior period and the time remaining until the products expire. The research problem is solved with both the NSGA-II and NRGA algorithms, whose parameters are tuned by the Taguchi method. The two efficient frontiers are compared using four criteria: MID, DM, SM, and NOS. The results show that NSGA-II is more efficient than NRGA at solving the research problem and forming the efficient frontier.


Introduction
Most end-consumer products are delivered through a multilevel supply chain (SC), in which information, raw materials, and parts are transferred from suppliers to manufacturers to create the final products, which then reach end consumers through different markets and distribution systems. Supply chain management (SCM) plays a critical role in reducing costs and maximizing profit, and its importance has been repeatedly emphasized in many studies. In general, the decisions made in managing a chain fall into three categories [1]:
- Strategic decisions: with long-term impacts, generally including location and allocation decisions.
- Tactical decisions: with medium-term impacts, generally including inventory and transportation decisions.
- Operational decisions: with routine, short-term effects, such as workstation and route planning.
Before the emergence of the SC concept, each of the above decisions was taken individually; each problem was modeled separately and optimal values of its decision variables were determined [2]. With the development of the SC concept, however, combinations of these decision-making problems were put on the agenda [3]. For example, the location-inventory problem is a well-known SC problem in which strategic and tactical decisions are made concurrently. Likewise, the inventory-routing and location-routing problems are long-familiar issues that have received much attention in the research area [4].
In general, solving SC problems individually leads to suboptimal answers that are not necessarily best for the entire SC: if strategic, tactical, and operational decisions are modeled and taken separately, the resulting solutions will not be globally optimal. A model is therefore needed that covers all three decision levels and provides an overall optimum across them, an issue that has received little attention in previous research [5].
Planning and optimizing the SC of perishable food is even more complex. The first reason is that the demand for these products depends strongly on quality, and quality in turn depends on the expiration date of the products. It is therefore important to develop suitable models for optimizing location, inventory control, and routing in these chains [6]. Moreover, in the perishable-food SC there are usually thousands of miles between farm and end consumer. These chains must be managed so that the best-quality products reach their destination, given the time constraints and real-world uncertainties; the large number of processes in the SC for such products and the need to comply with high standards add further difficulty. The present study investigates an integrated SC planning problem in which a central warehouse distributes food. Foods are classified into three categories according to their deterioration rate: common, high, and very high. Products with a common deterioration rate are kept in a normal warehouse, high-deterioration products are stored in refrigerators, and very-high-deterioration products are kept in freezers. Potential locations for wholesale warehouses are identified, from which appropriate sites must be selected, together with the inventory level of each warehouse and the allocation of warehouses to customers. The research problem is defined as a two-objective model: the primary goal is to decrease the total cost, and the second is to limit the probability of warehouses failing to preserve the product (storage waste), which in turn maximizes product quality. Because the model is a nonlinear program (NLP) and the two objectives conflict, requiring optimal solutions in the form of an efficient frontier, both the NSGA-II and NRGA algorithms are used to optimize the problem, and their performance is compared.
In the following, the research literature is reviewed, then the proposed model and its solution methodology are presented, and finally the results are discussed and analyzed.

Literature review
Much research has focused on food-chain optimization. Broekmeulen (1998) proposed a MILP-based modeling approach to optimize the SC performance of vegetables and fruits and investigated its efficiency using chain simulation [9]. Zhang et al. (2003) discussed the optimization of frozen-food quality in the SC; their model minimizes holding and transportation costs throughout the chain and maximizes the quality of the frozen commodity, which is inversely related to the spoilage rate [10].
Lucas and Chhajed (2004) investigated the location of agricultural farms using linear programming and presented a model for optimally locating six farms in a real-world setting [11]. Ahuja et al. (2007) optimized the allocation of retailers to wholesalers and routed the distribution of goods, considering a time constraint for distributing perishable goods, using nonlinear programming [12].

Ahumada and Villalobos (2009) and Akkerman and Grunow (2011) developed linear programming models to optimize the location and routing problem for perishable products [13,14].
Zanoni et al. (2012) proposed a model for SC optimization in which the quality of perishable goods changes with temperature and shelf life, optimizing quality and total cost as two objectives [15].
Nagurney and Masoumi (2012) modeled and optimized the SC of a perishable commodity using linear programming, designing a sustainable type of chain (green-chain goals, social responsibility, and cost) [16].
Agustina et al. (2014) used mixed-integer linear programming to optimize routing and inventory control in the perishable-food SC. They formulated the problem with two objectives, cost and quality, and considered quality to be influenced by the expiration date of products. A multi-objective genetic algorithm was used to solve the problem [17].
Lanfranchi et al. (2016) designed a three-objective problem for perishable products, minimizing delivery time, cost, and harmful-gas emissions [18].
Mousavi and Amiri (2017) designed a sustainable green SC using mixed-integer programming and solved it with the NSGA-II algorithm. They defined food quality as influenced by freshness and the time remaining until the products expire [19].
Rafie-Majd et al. (2018) discussed simultaneous decision making at the strategic, tactical, and operational levels for perishable products [20]. Tavana et al. (2018) examined the SC of perishable goods with social and humanitarian benefits in pre-crisis and post-crisis situations; the problem was solved in a multi-objective manner using the NSGA-II algorithm [21].
Hsu (2019) used the quality of perishable products as an SC objective and developed a two-objective SC model using linear programming [22]. Li et al. (2019) developed a three-level SC problem with location, routing, and inventory decisions and proposed a multi-step heuristic based on the simulated annealing algorithm for perishable goods [23]. According to the research literature, a model is needed that simultaneously addresses the conflicting multi-objective SC problem (reducing costs, reducing the number of spoiled goods, and increasing quality) while taking into account uncertain demand for perishable products and grading of the degree of spoilage, so as to optimize the chain inventory as a function of the time remaining before products in the warehouse expire. In addition, to improve the accuracy of the solutions, this paper uses the Pareto-frontier concept to provide a set of optimal solutions. The model is solved by two algorithms whose solution accuracy is evaluated, and the structure of these algorithms is tuned by the Taguchi method.

Research Methodology
This section first presents the proposed research model and then describes the solution methods based on the NSGA-II and NRGA algorithms.

Proposed model
The model parameters are given in Table 1.

Solving methodology
To solve the research model, the two meta-heuristics NSGA-II and NRGA are used. Both methods are developed from the genetic algorithm and are used to solve multi-objective models and produce efficient frontiers. This section introduces the general structure of the two algorithms. The main reason for using the evolutionary NSGA-II algorithm for multi-objective optimization problems is its ability to handle a set of feasible solutions and find the Pareto-optimal front in each run. NSGA-II was developed by Deb et al. in 2002 as a modified version of the original NSGA; it computes faster and converges better to the non-dominated front than other evolutionary multi-objective algorithms. Table 3 compares the two algorithms. The steps for finding non-dominated solutions with NSGA-II are as follows:

I. Collecting parameters
The model parameters include the number of objective functions, the number of constraints, and the number of independent variables. Problem-specific parameters include those used in the research model for ordering, holding, demand rate, etc.

II. Selecting the parameters of the Genetic Algorithm (GA)
Set the population size N, the number of generations, the crossover and mutation probabilities, and the lower and upper bounds of the variables.

III. Creating the initial population
Solution coding: the information encoded on the chromosome consists of two parts: 1) the quantity of each product type shipped to each warehouse and from each warehouse to each customer in each time period; 2) the decisions on opening warehouses and ordering each product type.
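As an illustration, a candidate solution with the two parts described above might be encoded as follows. This is a minimal sketch with hypothetical, made-up problem dimensions and field names; the paper's actual encoding and instance sizes are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical problem dimensions (illustrative only, not from the paper).
n_products, n_warehouses, n_customers, n_periods = 3, 4, 5, 6

def random_chromosome():
    """One candidate solution with the two parts described above."""
    return {
        # Part 1: shipped quantities (continuous genes).
        "ship_to_wh": rng.uniform(0, 100, (n_products, n_warehouses, n_periods)),
        "ship_to_cust": rng.uniform(
            0, 100, (n_products, n_warehouses, n_customers, n_periods)
        ),
        # Part 2: binary decisions on opening warehouses and ordering each type.
        "open_wh": rng.integers(0, 2, n_warehouses),
        "order": rng.integers(0, 2, (n_products, n_periods)),
    }

c = random_chromosome()
```

An initial population would then simply be a list of such chromosomes.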

IV. Genetic operators
Crossover operator: crossover is one of the main operators in a genetic algorithm and needs to be set up correctly. Based on the crossover rate defined in the parameters, chromosomes are selected from the current population. A random number between zero and one is then generated and used as the weight of a weighted (arithmetic) combination of the two selected chromosomes, which produces the new chromosomes.

Mutation operator:
Based on the mutation rate, a number of chromosomes are selected from the current population. In each selected chromosome, a randomly chosen gene is replaced with a new random value, and the mutated chromosome is created while respecting the lower and upper bounds of the variables.
Using the above operators, the child population is formed.
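The two operators above can be sketched as follows, assuming real-valued genes with fixed lower and upper bounds (the bounds and vector length here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
LOW, HIGH = 0.0, 100.0  # assumed variable bounds

def arithmetic_crossover(p1, p2):
    """Weighted (arithmetic) crossover: a random weight in [0, 1) blends the parents."""
    alpha = rng.random()
    child1 = alpha * p1 + (1 - alpha) * p2
    child2 = alpha * p2 + (1 - alpha) * p1
    return child1, child2

def mutate(chrom, pm=0.1):
    """With probability pm, replace a randomly chosen gene by a fresh value in bounds."""
    child = chrom.copy()
    if rng.random() < pm:
        gene = rng.integers(len(child))
        child[gene] = rng.uniform(LOW, HIGH)
    return child

p1 = rng.uniform(LOW, HIGH, 8)
p2 = rng.uniform(LOW, HIGH, 8)
c1, c2 = arithmetic_crossover(p1, p2)
m = mutate(c1, pm=1.0)
```

Note that arithmetic crossover keeps children inside the parents' bounding box, so no bound repair is needed after crossover.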

V. Population composition
The parent and child populations formed in the previous steps are combined to form the Archive.

VI. Forming the layers (fronts) of the Pareto surface
All the objective functions are calculated for every member of the Archive set.
After calculating all the objective functions for the members of the Archive set, each member is assigned a level (front) number by non-dominated sorting of the members.

VII. Sorting each level by density (crowding distance)
Since individuals are selected based on their front number and density (crowding distance), every individual in the population is assigned a crowding-distance value, calculated by the following formula:

CD_i = Σ_{k=1}^{K} (f_k^{i+1} − f_k^{i−1}) / (f_k^{max} − f_k^{min})

Here K is the number of objective functions and i indexes the members of the Archive set, sorted by objective k; f_k^{max} and f_k^{min} are the maximum and minimum calculated values of objective function k. A large value of this parameter indicates better diversity and spread in the population set.
ISSN: 00333077, www.psychologyandeducation.net
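A minimal sketch of the crowding-distance computation for one front (assuming minimized objectives; the sample points are illustrative only):

```python
def crowding_distance(objs):
    """Crowding distance of each solution within one front (K-objective tuples)."""
    n = len(objs)
    K = len(objs[0])
    dist = [0.0] * n
    for k in range(K):
        order = sorted(range(n), key=lambda i: objs[i][k])
        fmin, fmax = objs[order[0]][k], objs[order[-1]][k]
        # Boundary points are always kept (infinite distance).
        dist[order[0]] = dist[order[-1]] = float("inf")
        if fmax == fmin:
            continue
        for pos in range(1, n - 1):
            i = order[pos]
            dist[i] += (objs[order[pos + 1]][k] - objs[order[pos - 1]][k]) / (fmax - fmin)
    return dist

front = [(1.0, 9.0), (2.0, 3.0), (4.0, 1.0)]
d = crowding_distance(front)
```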

VIII. Create the next generation
The next generation of N individuals is selected from the combined population by comparing front numbers and crowding distances. If two candidates come from different fronts, the one with the lower front number is selected (solutions at lower front numbers are better, non-dominated solutions). If both candidates have the same front number, the one with the greater crowding distance is selected, which increases the dispersion of the solutions.
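Putting front number and crowding distance together, the survivor selection can be sketched as a sort; this is a simplified truncation view of the comparison rule described above, with illustrative data:

```python
def next_generation(archive, levels, dists, N):
    """Pick N survivors: lower front number first; larger crowding distance breaks ties."""
    order = sorted(range(len(archive)), key=lambda i: (levels[i], -dists[i]))
    return [archive[i] for i in order[:N]]

archive = ["a", "b", "c", "d"]
survivors = next_generation(
    archive,
    levels=[0, 1, 0, 0],
    dists=[1.0, 9.0, float("inf"), 0.5],
    N=2,
)
```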
These steps continue until the termination condition is met.
The termination condition is defined as follows:
- No new solution enters the efficient frontier for a specified number of iterations.
- A maximum number of iterations is set, and the algorithm finishes after that many iterations.
Another algorithm used in this research is NRGA. It differs from NSGA-II in its use of roulette-wheel selection: parents for the crossover operator are chosen with a roulette wheel, so better solutions are more likely to be selected, but are not selected with certainty.
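A possible sketch of this ranked roulette wheel: the weighting scheme below is an assumption for illustration only, but it captures the idea that lower front numbers get larger slices while selection remains probabilistic.

```python
import random

def ranked_roulette(levels, rng=random.Random(42)):
    """Roulette-wheel pick: better (lower) front numbers get larger, but not certain, odds."""
    worst = max(levels)
    weights = [worst - lv + 1 for lv in levels]  # lower level -> bigger slice
    return rng.choices(range(len(levels)), weights=weights, k=1)[0]

# 200 draws from a population whose front numbers are 0, 0, 1, 2.
picks = [ranked_roulette([0, 0, 1, 2]) for _ in range(200)]
```

Unlike NSGA-II's deterministic comparison, even the worst-ranked member retains a nonzero chance of being chosen.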

IX. Parameter setting
The proper setting of parameters and operators has a great impact on the performance of the algorithms used. One widely used method is Taguchi experimental design. Two of its main goals are to minimize the effect of noise factors and to determine the best levels of the controllable factors. After designing the experiments, the results of each experiment are analyzed. Taguchi suggests analyzing the mean responses; the variability can also be analyzed using signal-to-noise (S/N) ratios, of which three are considered standard. Since the objective functions of the problem are of the smaller-the-better type, the ratio is calculated using the following formula:

S/N = −10 · log10( (1/n) Σ_{i=1}^{n} y_i² )

where n is the number of replications and y_i is the response of replication i. The population size (npop), crossover probability (Pc), mutation probability (Pm), and maximum number of iterations are the four controllable factors of the proposed algorithms, each with three levels. Table 4 shows the factors and their levels. According to the Taguchi method, 9 designs are selected for four factors with three levels (Table 5). Each designed experiment is run and repeated 3 times, and the mean responses and S/N ratios are calculated; the values obtained are shown in the diagrams below. Finally, by examining the S/N and mean plots, the level of each factor is determined. Given that the objective function is minimized, levels with lower mean responses (and correspondingly better S/N ratios) are preferred and are selected.
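The smaller-the-better ratio can be computed as follows (a minimal sketch; the response values are illustrative, not from the paper's experiments):

```python
import math

def sn_smaller_is_better(responses):
    """Taguchi smaller-the-better signal-to-noise ratio: S/N = -10*log10(mean(y^2))."""
    return -10.0 * math.log10(sum(y * y for y in responses) / len(responses))

# Three replications of one experimental design row (illustrative numbers only).
sn = sn_smaller_is_better([10.0, 10.0, 10.0])
```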

Methodological performance evaluation
There are two categories of criteria for evaluating the performance of multi-objective meta-heuristic algorithms:
I. Qualitative criteria, including the quality metric (QM), mean ideal distance (MID), data envelopment analysis (DEA), and number of non-dominated solutions (NOS).
II. Quantitative criteria, including the diversification metric (DM) and spacing metric (SM).
In this study, the four criteria MID, DM, SM, and NOS-Pareto were used to evaluate the two algorithms and compare their performance.
Mean ideal distance (MID): this measure calculates the proximity of the Pareto solutions to the ideal point; the lower its value, the closer the Pareto solutions are to the ideal. It is calculated by the following relation:

MID = (1/n) Σ_{i=1}^{n} sqrt( Σ_{k=1}^{K} (f_k^i − f_k^{ideal})² )

Here n is the number of non-dominated solutions and the f_k^{ideal} are the coordinates of the ideal point.

Diversification metric (DM): this criterion measures the diversity of the non-dominated solutions; the higher its value, the greater the spread of the non-dominated set. It is calculated by the following relation:

DM = sqrt( Σ_{k=1}^{K} ( max_i f_k^i − min_i f_k^i )² )

Spacing metric (SM): this criterion calculates the relative distance between consecutive solutions. When the solutions are evenly spaced, the SM value decreases; therefore, an algorithm whose non-dominated solutions have a small spacing value is more desirable.

SM = Σ_{i=1}^{n−1} | d̄ − d_i | / ( (n−1) · d̄ )

In this formula, d_i is the Euclidean distance between consecutive points and d̄ is the mean of these distances.
Number of non-dominated solutions (NOS-Pareto): this criterion counts the optimal solutions that each algorithm succeeds in finding; a higher value indicates better algorithm performance.
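The four criteria can be sketched for a toy front as follows (the data are illustrative, and the ideal point is assumed to be the origin here):

```python
import math

def mid(front, ideal):
    """Mean ideal distance: average Euclidean distance from each solution to the ideal."""
    dists = [math.dist(p, ideal) for p in front]
    return sum(dists) / len(dists)

def dm(front):
    """Diversification metric: diagonal of the bounding box of the front."""
    K = len(front[0])
    return math.sqrt(
        sum((max(p[k] for p in front) - min(p[k] for p in front)) ** 2 for k in range(K))
    )

def sm(front):
    """Spacing metric: mean absolute deviation of consecutive distances, normalized."""
    pts = sorted(front)
    d = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    dbar = sum(d) / len(d)
    return sum(abs(dbar - di) for di in d) / (len(d) * dbar)

pareto = [(0.0, 4.0), (3.0, 0.0)]
nos = len(pareto)  # number of non-dominated solutions
m = mid(pareto, ideal=(0.0, 0.0))
```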
To evaluate the model's performance and compare the two methods, two SC scenarios were simulated. In each, 1 central distribution warehouse and 10 wholesale warehouses were randomly selected from 100 potential warehouses, and 100 customers were randomly assigned to them. The two algorithms were run separately on each simulated SC to investigate the performance of both NSGA-II and NRGA.

Research results
To investigate the efficiency of the proposed research model, this section simulates two SCs and examines the performance of both the NSGA-II and NRGA algorithms. Each simulated SC consists of 1 central warehouse for distributing products, 10 wholesale warehouses chosen from 100 potential locations, and 100 customers.
First, the parameters of the two proposed algorithms are tuned using the Taguchi method; the tuning results are shown in Fig. 1 and Fig. 2. As is evident from Figures 1 and 2, neither method is completely superior to the other, so neither can be selected as the best-performing algorithm on that basis alone. The performance of the two algorithms is therefore compared using the four criteria described in the previous section (Table 6). For the first scenario, NSGA-II performs better, and for the second scenario it performs almost equally to NRGA. Overall, therefore, the ability of the NSGA-II algorithm to solve the research problem is evaluated as higher.

Conclusion
The model developed in the present study considers two conflicting objectives for SC optimization. As spending on building and maintaining warehouses increases, the rate of product spoilage and the probability of storage failure decrease, but the costs of the chain increase. It is therefore impossible to provide a single solution in which both objectives attain their optimal values simultaneously. In this study, the concept of the Pareto frontier and a set of optimal solutions were used. Demand was defined as uncertain and dependent on two factors: the demand of the prior period and the time remaining until product expiration. The research problem was solved with both the NSGA-II and NRGA algorithms, whose parameters were tuned by the Taguchi method. Each algorithm produces an efficient frontier for each of the two research problems. The two efficient frontiers were compared using the four criteria MID, DM, SM, and NOS, and it was concluded that NSGA-II is more efficient than NRGA at solving the research problem and forming the efficient frontier.