{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2022,11,1]],"date-time":"2022-11-01T04:23:24Z","timestamp":1667276604708},"reference-count":27,"publisher":"Institute of Electronics, Information and Communications Engineers (IEICE)","issue":"7","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IEICE Trans. Fundamentals"],"published-print":{"date-parts":[[2020,7,1]]},"DOI":"10.1587\/transfun.2019eap1134","type":"journal-article","created":{"date-parts":[[2020,6,30]],"date-time":"2020-06-30T22:11:19Z","timestamp":1593555079000},"page":"928-936","source":"Crossref","is-referenced-by-count":2,"title":["Magic Line: An Integrated Method for Fast Parts Counting and Orientation Recognition Using Industrial Vision Systems"],"prefix":"10.1587","volume":"E103.A","author":[{"given":"Qiaochu","family":"ZHAO","sequence":"first","affiliation":[{"name":"Graduate School of Information Science and Technology, Osaka University"}]},{"given":"Ittetsu","family":"TANIGUCHI","sequence":"additional","affiliation":[{"name":"Graduate School of Information Science and Technology, Osaka University"}]},{"given":"Makoto","family":"NAKAMURA","sequence":"additional","affiliation":[{"name":"Laboratory of Hi-Think Corporation"}]},{"given":"Takao","family":"ONOYE","sequence":"additional","affiliation":[{"name":"Graduate School of Information Science and Technology, Osaka University"}]}],"member":"532","reference":[{"key":"1","unstructured":"[1] M.W. Spong, S. Hutchinson, M. Vidyasagar, et al., Robot Modeling and Control, John Wiley & Sons, 2006."},{"key":"2","unstructured":"[2] S.K. Nayar, \u201cRobotic vision system,\u201d U.S. Patent US4611292A, Jan. 1990."},{"key":"3","doi-asserted-by":"publisher","unstructured":"[3] W.A. Perkins, \u201cA model-based vision system for industrial parts,\u201d IEEE Trans. Comput., vol.C-27, no.2, pp.126-143, 1978. 10.1109\/tc.1978.1675046","DOI":"10.1109\/TC.1978.1675046"},{"key":"4","doi-asserted-by":"publisher","unstructured":"[4] Y.-R. Chen, K. Chao, and M.S. Kim, \u201cMachine vision technology for agricultural applications,\u201d Comput. Electron. Agric., vol.36, no.2-3, pp.173-191, 2002. 10.1016\/s0168-1699(02)00100-x","DOI":"10.1016\/S0168-1699(02)00100-X"},{"key":"5","doi-asserted-by":"publisher","unstructured":"[5] B. \u00c5strand and A.-J. Baerveldt, \u201cAn agricultural mobile robot with vision-based perception for mechanical weed control,\u201d Auton. Robot., vol.13, no.1, pp.21-35, 2002. 10.1023\/a:1015674004201","DOI":"10.1023\/A:1015674004201"},{"key":"6","doi-asserted-by":"publisher","unstructured":"[6] A.K. Das, R. Fierro, V. Kumar, J.P. Ostrowski, J. Spletzer, and C.J. Taylor, \u201cA vision-based formation control framework,\u201d IEEE Trans. Robot. Autom., vol.18, no.5, pp.813-825, 2002. 10.1109\/tra.2002.803463","DOI":"10.1109\/TRA.2002.803463"},{"key":"7","unstructured":"[7] Sharp Corporation. Specifications of vision systems. [Online]. Available: http:\/\/www.sharp-world.com\/business\/en\/image-sensor-camera\/products\/iv-s301m_311m\/spec.html"},{"key":"8","doi-asserted-by":"crossref","unstructured":"[8] G.P. Maul and N.I. Jaksic, \u201cSensor-based solution to contiguous and overlapping parts in vibratory bowl feeders,\u201d J. Manuf. Syst., vol.13, no.3, pp.190-195, 1994. 10.1016\/0278-6125(94)90004-3","DOI":"10.1016\/0278-6125(94)90004-3"},{"key":"9","unstructured":"[9] B.M. Gross, \u201cApparatus and method for counting a plurality of similar articles,\u201d U.S. Patent US4982412A, March 1989."},{"key":"10","unstructured":"[10] H. Yuyama, N. Koike, and M. Fukada, \u201cMedicine feeding device and a medicine counting device using the medicine feeding device,\u201d U.S. Patent US8985389B2, Jan. 2011."},{"key":"11","unstructured":"[11] S. Ito, F. Kojima, T. Yamamoto, Y. Motohiro, and A. Nagao, \u201cSmall parts of the counting supply device,\u201d JP Patent JP4362239B2, Feb. 2001."},{"key":"12","doi-asserted-by":"crossref","unstructured":"[12] R. Brunelli, Template Matching Techniques in Computer Vision: Theory and Practice, John Wiley & Sons, 2009. 10.1002\/9780470744055","DOI":"10.1002\/9780470744055"},{"key":"13","doi-asserted-by":"publisher","unstructured":"[13] S.K. Choudhury, R.P. Padhy, P.K. Sa, and S. Bakshi, \u201cHuman detection using orientation shape histogram and coocurrence textures,\u201d Multimed. Tools Appl., vol.78, no.10, pp.13949-13969, 2019. 10.1007\/s11042-018-6866-8","DOI":"10.1007\/s11042-018-6866-8"},{"key":"14","doi-asserted-by":"publisher","unstructured":"[14] S.K. Choudhury, P.K. Sa, R.P. Padhy, S. Sharma, and S. Bakshi, \u201cImproved pedestrian detection using motion segmentation and silhouette orientation,\u201d Multimed. Tools Appl., vol.77, no.11, pp.13075-13114, 2018. 10.1007\/s11042-017-4933-1","DOI":"10.1007\/s11042-017-4933-1"},{"key":"15","unstructured":"[15] W.T. Freeman and M. Roth, \u201cOrientation histograms for hand gesture recognition,\u201d International Workshop on Automatic Face and Gesture Recognition, vol.12, pp.296-301, 1995."},{"key":"16","doi-asserted-by":"publisher","unstructured":"[16] R. Ranjan, V.M. Patel, and R. Chellappa, \u201cHyperFace: A deep multi-task learning framework for face detection, landmark localization, pose estimation, and gender recognition,\u201d IEEE Trans. Pattern Anal. Mach. Intell., vol.41, no.1, pp.121-135, 2019. 10.1109\/tpami.2017.2781233","DOI":"10.1109\/TPAMI.2017.2781233"},{"key":"17","doi-asserted-by":"crossref","unstructured":"[17] T. Harsha and K. Fousiya, \u201cVisual orientation and recognition of an image,\u201d 2016 Online International Conference on Green Engineering and Technologies (IC-GET), pp.1-4, IEEE, 2016. 10.1109\/get.2016.7916762","DOI":"10.1109\/GET.2016.7916762"},{"key":"18","doi-asserted-by":"crossref","unstructured":"[18] D. Segarra, J. Caballeros, W.G. Aguilar, A. Sam\u00e0, and D. Rodr\u00edguez-Mart\u00edn, \u201cOrientation estimation using filter-based inertial data fusion for posture recognition,\u201d International Symposium on Algorithms and Experiments for Sensor Systems, Wireless Networks and Distributed Robotics, pp.220-233, Springer, 2018. 10.1007\/978-3-030-14094-6_15","DOI":"10.1007\/978-3-030-14094-6_15"},{"key":"19","unstructured":"[19] Q. Zhao, I. Taniguchi, M. Nakamura, and T. Onoye, \u201cAn efficient parts counting method based on intensity distribution analysis for industrial vision systems,\u201d The 21st Workshop on Synthesis and System Integration of Mixed Information Technologies (SASIMI 2018), pp.118-123, 2018."},{"key":"20","doi-asserted-by":"crossref","unstructured":"[20] A.B. Chan, Z.-S.J. Liang, and N. Vasconcelos, \u201cPrivacy preserving crowd monitoring: Counting people without people models or tracking,\u201d 2008 IEEE Conference on Computer Vision and Pattern Recognition, pp.1-7, IEEE, 2008. 10.1109\/cvpr.2008.4587569","DOI":"10.1109\/CVPR.2008.4587569"},{"key":"21","doi-asserted-by":"crossref","unstructured":"[21] V.Q. Pham, T. Kozakaya, O. Yamaguchi, and R. Okada, \u201cCount forest: Co-voting uncertain number of targets using random forest for crowd density estimation,\u201d Proc. IEEE International Conference on Computer Vision, pp.3253-3261, 2015. 10.1109\/iccv.2015.372","DOI":"10.1109\/ICCV.2015.372"},{"key":"22","doi-asserted-by":"publisher","unstructured":"[22] W. Xie, J.A. Noble, and A. Zisserman, \u201cMicroscopy cell counting and detection with fully convolutional regression networks,\u201d Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, vol.6, no.3, pp.283-292, 2018. 10.1080\/21681163.2016.1149104","DOI":"10.1080\/21681163.2016.1149104"},{"key":"23","unstructured":"[23] S. Fujisawa, G. Hasegawa, Y. Taniguchi, and H. Nakano, \u201cPedestrian counting in video sequences based on optical flow clustering,\u201d International Journal of Image Processing, vol.7, no.1, pp.1-16, 2013."},{"key":"24","doi-asserted-by":"crossref","unstructured":"[24] C. Zhang, H. Li, X. Wang, and X. Yang, \u201cCross-scene crowd counting via deep convolutional neural networks,\u201d IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2015. 10.1109\/cvpr.2015.7298684","DOI":"10.1109\/CVPR.2015.7298684"},{"key":"25","doi-asserted-by":"crossref","unstructured":"[25] J. Barandiaran, B. Murguia, and F. Boto, \u201cReal-time people counting using multiple lines,\u201d 2008 Ninth International Workshop on Image Analysis for Multimedia Interactive Services, pp.159-162, IEEE, 2008. 10.1109\/wiamis.2008.27","DOI":"10.1109\/WIAMIS.2008.27"},{"key":"26","unstructured":"[26] B.D. Lucas and T. Kanade, \u201cAn iterative image registration technique with an application to stereo vision,\u201d Proc. 7th IJCAI, 1981."},{"key":"27","unstructured":"[27] A. Ben-Hur, D. Horn, H.T. Siegelmann, and V. Vapnik, \u201cSupport vector clustering,\u201d J. Machine Learning Research, vol.2, no.Dec, pp.125-137, 2001."}],"container-title":["IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transfun\/E103.A\/7\/E103.A_2019EAP1134\/_pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,31]],"date-time":"2022-10-31T11:17:32Z","timestamp":1667215052000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transfun\/E103.A\/7\/E103.A_2019EAP1134\/_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,1]]},"references-count":27,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2020]]}},"URL":"https:\/\/doi.org\/10.1587\/transfun.2019eap1134","relation":{},"ISSN":["0916-8508","1745-1337"],"issn-type":[{"value":"0916-8508","type":"print"},{"value":"1745-1337","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,1]]}}}