Agent Simulation Model Based on Similarity Measure

Hongchen Wei, Xiaobin Li

Baoji University of Arts and Sciences, Baoji, Shaanxi Province, 721016, China

Corresponding Author Email: happypix@163.com, 343955450@qq.com
Page: 657-672
DOI: https://doi.org/10.18280/ama_b.600403
Received: 15 March 2017
Accepted: 15 April 2017

OPEN ACCESS

Abstract: 

When data in the target domain are scarce, the performance of traditional agent simulation models tends to degrade. In this scenario, an effective learning strategy is to extract useful knowledge from the source domain to guide learning in the target domain, so as to obtain more appropriate class information and better agent simulation performance. Based on a similarity measure, this paper proposes a biased, transfer affinity propagation (TAP) agent simulation algorithm, which introduces a biased learning mechanism into the affinity propagation (AP) algorithm to exploit the similarity between the source and target domain data distributions, thereby improving the simulation performance of AP in data-scarce scenarios. To ensure the validity of the bias, TAP improves on AP by considering both the statistical characteristics and the geometric structure of the source and target domains, and its message passing mechanism makes it possible to assist target domain learning. In addition, the factor graph of TAP shows that TAP behaves similarly to AP when data in the target domain are lacking. Experiments on both simulated and real data sets show that the proposed algorithm handles agent simulation tasks with insufficient data more effectively than the classical AP algorithm, achieving better performance.
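This page does not reproduce the TAP update equations, but the classical affinity propagation core from [1], on which the proposed method builds its message passing, can be sketched in a few lines of NumPy. The function below, its damping value, and the median-preference default in the usage note are illustrative assumptions, not code from the paper.

```python
import numpy as np

def affinity_propagation(S, damping=0.5, iters=200):
    """Minimal affinity propagation (Frey & Dueck, 2007).

    S: (n, n) similarity matrix; S[k, k] holds the 'preference'
    controlling how readily point k becomes an exemplar.
    Returns, for each point, the index of its chosen exemplar.
    """
    n = S.shape[0]
    R = np.zeros((n, n))  # responsibilities r(i, k)
    A = np.zeros((n, n))  # availabilities  a(i, k)
    rows = np.arange(n)
    for _ in range(iters):
        # Responsibility: r(i,k) = s(i,k) - max_{k' != k} [a(i,k') + s(i,k')]
        AS = A + S
        idx = np.argmax(AS, axis=1)
        first_max = AS[rows, idx]
        AS[rows, idx] = -np.inf           # mask the max to find the runner-up
        second_max = AS.max(axis=1)
        R_new = S - first_max[:, None]
        R_new[rows, idx] = S[rows, idx] - second_max
        R = damping * R + (1 - damping) * R_new
        # Availability: a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        # and a(k,k) = sum_{i' != k} max(0, r(i',k))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())  # keep r(k,k) un-clipped in the column sum
        A_new = Rp.sum(axis=0)[None, :] - Rp
        dA = A_new.diagonal().copy()
        A_new = np.minimum(A_new, 0)
        np.fill_diagonal(A_new, dA)
        A = damping * A + (1 - damping) * A_new
    # Each point's exemplar maximizes the combined evidence a(i,k) + r(i,k).
    return np.argmax(A + R, axis=1)
```

For two well-separated groups, e.g. points near 0 and points near 10 with similarity s(i, k) = -|x_i - x_k|^2 and the median similarity as the shared preference, the routine settles on one exemplar per group.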

Keywords: 

Biased learning, Statistical feature, Geometric structure, Similarity measure, Agent simulation method

1. Introduction
2. Biased Similarity Measure (TAP) Agent Simulation Model
3. Experimental Study
4. Conclusion
  References

[1] Frey BJ and Dueck D. Clustering by passing messages between data points. Science, 2007, 315:972-976. [doi: 10.1126/science.1136800]

[2] Kschischang FR, Frey BJ, and Loeliger HA. Factor graphs and the sum-product algorithm. IEEE Trans. Inf. Theory, 2001, 47(2): 498-519. [doi: 10.1109/18.910572]

[3] MacQueen JB. Some methods for classification and analysis of multivariate observations. In: Proc. of 5th Berkeley Symposium on Mathematical Statistics and Probability, Berkeley, University of California Press, 1967, 281-297.

[4] Tsymbal A, Pechenizkiy M, Cunningham P, Puuronen S. Dynamic integration of classifiers for handling concept drift. Information Fusion, 2008, 9(1): 56-68. [doi: 10.1016/j.inffus.2006.11.002]

[5] Dueck D, Frey BJ, Jojic N, Jojic V, Giaever G, Emili A, Musso G, Hegele R. Constructing treatment portfolios using affinity propagation. In: Proc. of 12th Annual International Conf. on Research in Computational Molecular Biology, 2008, 360-371. [doi: 10.1007/978-3-540-78839-3_31]

[6] Jain AK. Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 2010, 31(8): 651-666. [doi: 10.1016/j.patrec.2009.09.011]

[7] Leone M, Sumedha, and Weigt M. Unsupervised and semi-supervised clustering by message passing: soft-constraint affinity propagation. The European Physical Journal B, 2008, 125-135. [doi: 10.1140/epjb/e2008-00381-8]

[8] Xiao J, Wang J, Tan P, et al. Joint affinity propagation for multiple view segmentation. In: Proc. of the 11th IEEE Int. Conf. on Computer Vision, 2007, 1-7. [doi: 10.1109/ICCV.2007.4408928]

[9] Strehl A and Ghosh J. Cluster ensembles --- a knowledge reuse framework for combining multiple partitions. The Journal of Machine Learning Research, 2003, 583-617. [doi: 10.1162/153244303321897735]

[10] Aggarwal CC, Han J, Wang J, and Yu P. A Framework for Clustering Evolving Data Streams. In: Proc. of the 29th VLDB Conf., 2003, 81-92.

[11] Shao L, Zhu F, Li X. Transfer learning for visual categorization: a survey. IEEE Transactions on Neural Networks and Learning Systems, 2015, 26(5): 1019-1034. [doi: 10.1109/TNNLS.2014.2330900]

[12] Zhuang FZ, He Q, Shi ZZ. Survey on transfer learning research. Journal of Software, 2015, 26(1): 26-39. [doi: 10.13328/j.cnki.jos.004631]

[13] Deng ZH, Jiang YZ, Choi K-S, Chung F-L, Wang ST. Knowledge-leverage-based TSK fuzzy system modeling. IEEE Transactions on Neural Networks and Learning Systems, 2013, 24(8): 1200-1212. [doi: 10.1109/TNNLS.2013.2253617]

[14] Deng ZH, Jiang YZ, Cao LB, Wang ST. Knowledge-leverage-based TSK fuzzy system with improved knowledge transfer. In: Proc. of the 2014 IEEE Int. Conf. on Fuzzy Systems, 2014, 178-185. [doi: 10.1109/FUZZ-IEEE.2014.6891544]

[15] Tommasi T, Orabona F, Caputo B. Learning categories from few examples with multi-model knowledge transfer. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2014, 36(5): 928-942. [doi: 10.1109/TPAMI.2013.197]

[16] Dai W, Yang Q, Xue G, and Yu Y. Boosting for transfer learning. In: Proc. of Int. Conf. on Machine Learning (ICML), 2007, 193-200. [doi: 10.1145/1273496.1273521]

[17] Raina R, Battle A, Lee H, et al. Self-taught learning: transfer learning from unlabeled data. In: Proc. of the 24th Int. Conf. on Machine Learning, 2007, 759-766. [doi: 10.1145/1273496.1273592]

[18] Xue GR, Dai W, Yang Q, Yu Y. Topic-bridged PLSA for cross-domain text classification. In: Proc. of the 31st Annual Int. ACM SIGIR Conf. on Research and Development in Information Retrieval, 2008, 627-634. [doi: 10.1145/1390334.1390441]

[19] Glorot X, Bordes A, Bengio Y. Domain adaptation for large-scale sentiment classification: a deep learning approach. In: Proc. of the 28th Int. Conf. on Machine Learning (ICML), 2011, 513-520.

[20] Papadimitriou CH and Steiglitz K. Combinatorial Optimization: Algorithms and Complexity. Dover Publications, 1998.