Improving Pricing Intelligence by Multi-Modal Deep Learning Method


Ninad Madhab, Alina Dash

Abstract

Deep networks have been applied successfully to data of a single modality (e.g. text or images), but for price comparison both modalities can contribute useful information. Pricing intelligence involves analysing, tracking, and monitoring pricing data to understand the market and make informed pricing changes at varying scale and speed. Frequent changes in product pricing have led retailers to continually monitor their relative price position and incorporate changes into an active strategy. With the right insight into which competing products are selling at the best price, retailers can respond instantly with similar offers and discounts that excite consumers or prompt them to switch from a competitor. Retailers therefore want automated pricing intelligence matched with a competitive approach. In this paper, we optimise and improve pricing intelligence by developing a deep learning model that considers both a product's image and its text. We use a novel method, a shared classification layer that generates hierarchical universal embeddings: a multi-modal deep-learning approach that produces embeddings combining a product's text and image representations, which can serve downstream classification and product-retrieval tasks. The model learns semantic information along with a cross-modal representation. It learns a shared hidden layer in which the distance between any two universal embeddings approximates the distance between their corresponding class embeddings in the semantic embedding space, and it uses a classification objective with a shared classification layer to ensure that the image and text embeddings lie in the same shared latent space.
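As a rough illustration of the objective described in the abstract, the sketch below pairs a shared linear classifier, applied to both modalities, with a loss that matches pairwise embedding distances to class-embedding distances. This is a minimal sketch only, assuming PyTorch and pre-extracted image/text features; the names (UniversalEmbeddingModel, semantic_distance_loss), dimensions, and class embeddings are hypothetical and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UniversalEmbeddingModel(nn.Module):
    """Sketch of a multi-modal model with a shared classification layer.

    Hypothetical projections map image and text features into one latent
    space; a single (shared) linear classifier is applied to both, so both
    modalities must land in the same space to classify well.
    """
    def __init__(self, img_dim=2048, txt_dim=768, emb_dim=256, n_classes=100):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, emb_dim)      # image branch
        self.txt_proj = nn.Linear(txt_dim, emb_dim)      # text branch
        self.shared_cls = nn.Linear(emb_dim, n_classes)  # shared classification layer

    def forward(self, img_feats, txt_feats):
        z_img = F.normalize(self.img_proj(img_feats), dim=-1)
        z_txt = F.normalize(self.txt_proj(txt_feats), dim=-1)
        # same classifier for both modalities
        return z_img, z_txt, self.shared_cls(z_img), self.shared_cls(z_txt)

def semantic_distance_loss(z, labels, class_emb):
    """Encourage pairwise distances between universal embeddings to
    mirror distances between their class embeddings."""
    d_z = torch.cdist(z, z)                                   # embedding distances
    d_c = torch.cdist(class_emb[labels], class_emb[labels])   # class distances
    return F.mse_loss(d_z, d_c)

# Toy usage: random features stand in for encoder outputs.
model = UniversalEmbeddingModel()
img = torch.randn(8, 2048)
txt = torch.randn(8, 768)
labels = torch.randint(0, 100, (8,))
class_emb = F.normalize(torch.randn(100, 256), dim=-1)  # hypothetical semantic class embeddings

z_img, z_txt, logits_img, logits_txt = model(img, txt)
loss = (F.cross_entropy(logits_img, labels)
        + F.cross_entropy(logits_txt, labels)
        + semantic_distance_loss(torch.cat([z_img, z_txt]),
                                 labels.repeat(2), class_emb))
loss.backward()
```

Because both branches pass through the same shared_cls layer, the classification objective pulls image and text embeddings into one shared latent space, while the distance loss shapes that space so that it reflects the geometry of the semantic class embeddings.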
