Representative Publications



  1. Y. Hamamoto, T. Kanaoka and S. Tomita:
    ``On a theoretical comparison between the orthonormal discriminant vector method and discriminant analysis'',
    Pattern Recognition, Vol.26, No.12, pp.1863-1867 (1993).

    Abstract:

    The performance of the orthonormal discriminant vector (ODV) method is discussed in comparison with discriminant analysis. The ODV method produces features that maximize the Fisher criterion subject to the orthonormality of the features. In contrast to discriminant analysis, the ODV method places no limit on the maximum number of features that can be extracted. From a theoretical viewpoint, it is proved that the ODV method is more powerful than discriminant analysis in terms of the Fisher criterion. The theoretical conclusion is experimentally verified on two real data sets.

    Keywords:

    feature extraction, discriminant analysis, orthonormal discriminant vector method, performance, Fisher criterion
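
    The core computation can be sketched for a two-class, two-dimensional case (a minimal illustration with made-up data, not the paper's code): the first discriminant vector maximizes the Fisher criterion, and each further ODV vector is constrained to be orthonormal to the earlier ones, which in 2-D leaves only the orthogonal complement.

```python
# Sketch of the orthonormal-discriminant-vector idea in 2-D (illustrative
# only; data and function names are our own).  The first vector maximizes
# the Fisher criterion J(w) = (w'Sb w)/(w'Sw w); the second must be
# orthonormal to it, so in 2-D it is simply the orthogonal complement --
# discriminant analysis would stop at one vector for two classes.

def mean(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(2)]

def fisher_direction(class1, class2):
    """First discriminant vector: w proportional to Sw^{-1}(m1 - m2), 2x2 case."""
    m1, m2 = mean(class1), mean(class2)
    # Pooled within-class scatter matrix Sw (2x2), accumulated elementwise.
    s = [[0.0, 0.0], [0.0, 0.0]]
    for cls, m in ((class1, m1), (class2, m2)):
        for p in cls:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    inv = [[s[1][1] / det, -s[0][1] / det],
           [-s[1][0] / det, s[0][0] / det]]
    diff = [m1[0] - m2[0], m1[1] - m2[1]]
    w = [inv[0][0] * diff[0] + inv[0][1] * diff[1],
         inv[1][0] * diff[0] + inv[1][1] * diff[1]]
    norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
    return [w[0] / norm, w[1] / norm]

class1 = [(2.0, 2.1), (2.5, 2.4), (3.0, 3.2)]
class2 = [(6.0, 5.8), (6.5, 6.6), (7.0, 7.1)]
w1 = fisher_direction(class1, class2)   # Fisher direction (unit length)
w2 = [-w1[1], w1[0]]                    # second ODV vector: orthonormal to w1
dot = w1[0] * w2[0] + w1[1] * w2[1]     # should be 0 by construction
```

    In higher dimensions the later vectors are found by re-maximizing the Fisher criterion in the subspace orthogonal to those already chosen, which is what lifts the feature-count limit of plain discriminant analysis.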



  2. Y. Hamamoto, S. Uchimura and S. Tomita:
    ``On the behavior of artificial neural network classifiers in high-dimensional spaces'',
    IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.18, No.5, pp.571-574 (1996).

    Abstract:

    It is widely believed in the pattern recognition field that when a fixed number of training samples is used to design a classifier, the generalization error of the classifier tends to increase as the number of features grows. In this paper, we discuss the generalization error of artificial neural network (ANN) classifiers in high-dimensional spaces, under the practical condition that the ratio of training sample size to dimensionality is small. Experimental results show that the generalization error of ANN classifiers appears much less sensitive to the feature size than that of the 1-NN, Parzen, and quadratic classifiers.

    Keywords:

    artificial neural networks, generalization error, dimensionality, training sample size, peaking phenomenon, 1-NN classifier, Parzen classifier



  3. Y. Hamamoto, Y. Fujimoto and S. Tomita:
    ``On the estimation of a covariance matrix in designing Parzen classifiers,''
    Pattern Recognition, Vol.29, No.10, pp.1751-1759 (1996).

    Abstract:

    The design of Parzen classifiers requires careful attention to the window-width as well as the kernel covariance matrices. Although considerable effort has been devoted to the selection of the window-width, the problem of estimating kernel covariance matrices has received little attention in the past. In this paper, we discuss kernel covariance estimators for the design of Parzen classifiers. We compare the performance of Parzen classifiers based on several kernel covariance estimators, such as the Toeplitz, Ness's, and orthogonal expansion estimators, on three artificial data sets. From the experimental results, we recommend the use of the Toeplitz estimator, particularly in high-dimensional spaces.

    Keywords:

    Parzen classifier, kernel covariance, small training sample size, high dimensions, error rate
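
    The classifier being tuned here can be sketched as follows (our own minimal 2-D illustration with made-up data; the kernel covariance is fixed to the simplest shared isotropic form h^2 I rather than any of the estimators the paper compares):

```python
# Minimal Parzen-window classifier sketch in 2-D (illustrative only).
# The per-class density estimate is the average of Gaussian kernels
# centred on the training samples; here the kernel covariance is the
# shared isotropic matrix h^2 * I, so h plays the window-width role.
import math

def parzen_density(x, samples, h):
    """Average of isotropic Gaussian kernels with window-width h."""
    d = 2
    norm = 1.0 / ((2 * math.pi) ** (d / 2) * h ** d)
    total = 0.0
    for s in samples:
        sq = sum((x[i] - s[i]) ** 2 for i in range(d))
        total += norm * math.exp(-sq / (2 * h * h))
    return total / len(samples)

def classify(x, class_samples, h=1.0):
    """Assign x to the class whose Parzen density estimate is largest."""
    scores = [parzen_density(x, s, h) for s in class_samples]
    return scores.index(max(scores))

train = [
    [(0.0, 0.0), (1.0, 0.5), (0.5, 1.0)],   # class 0
    [(5.0, 5.0), (5.5, 4.5), (4.5, 5.5)],   # class 1
]
label = classify((0.8, 0.7), train)          # falls in class 0's region
```

    Replacing `h * h` with a full per-class covariance matrix inside the exponent is where the estimators compared in the paper (Toeplitz, Ness's, orthogonal expansion) would enter.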



  4. Y. Hamamoto, S. Uchimura and S. Tomita:
    ``A bootstrap technique for nearest neighbor classifier design'',
    IEEE Trans. Pattern Analysis and Machine Intelligence, Vol.19, No.1, pp.73-79 (1997).

    Abstract:

    A bootstrap technique for nearest neighbor classifier design is proposed. Our primary interest in designing a classifier is in small training sample size situations. Conventional bootstrap techniques resample the training set with replacement; in contrast, our technique generates bootstrap samples by locally combining original training samples. The nearest neighbor classifier is designed on the bootstrap samples and tested on test samples independent of the training samples. The performance of the proposed classifier is demonstrated on three artificial data sets and one real data set. Experimental results show that the nearest neighbor classifier designed on the bootstrap samples outperforms the conventional k-NN classifiers as well as the edited 1-NN classifiers, particularly in high dimensions.

    Keywords:

    bootstrap, nearest neighbor classifier, error rate, peaking phenomenon, small training sample size, high dimensions, outlier
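
    Our reading of the local-combination idea can be sketched as follows (assumptions: uniform weights over each sample and its r nearest same-class neighbours, and made-up 2-D data; the authors' actual weighting scheme may differ):

```python
# Sketch of a local-combination bootstrap (illustrative reading of the
# abstract, not the authors' code): instead of resampling with
# replacement, each original training sample is replaced by the mean of
# itself and its r nearest same-class neighbours, which smooths the
# training set (note how the outlier is pulled inward) before a 1-NN
# classifier is built on it.

def dist2(a, b):
    return sum((a[i] - b[i]) ** 2 for i in range(len(a)))

def local_bootstrap(samples, r=2):
    """One bootstrap set: each point becomes the uniform-weight mean of
    itself and its r nearest neighbours from the same class."""
    out = []
    for p in samples:
        neigh = sorted(samples, key=lambda q: dist2(p, q))[:r + 1]
        out.append(tuple(sum(q[i] for q in neigh) / (r + 1)
                         for i in range(len(p))))
    return out

def nn_classify(x, train):
    """1-NN over (point, label) pairs."""
    return min(train, key=lambda t: dist2(x, t[0]))[1]

class0 = [(0.0, 0.0), (1.0, 0.2), (0.2, 1.0), (9.0, 9.0)]  # last point: outlier
class1 = [(5.0, 5.0), (5.5, 4.8), (4.8, 5.5), (5.2, 5.2)]
train = [(p, 0) for p in local_bootstrap(class0)] + \
        [(p, 1) for p in local_bootstrap(class1)]
```

    The smoothing explains the robustness to outliers mentioned in the keywords: an isolated point is averaged with its neighbours rather than copied verbatim into the design set.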



  5. Y. Hamamoto, S. Uchimura, M. Watanabe, T. Yasuda, Y. Mitani and S. Tomita:
    ``A Gabor filter-based method for recognizing handwritten numerals'',
    Pattern Recognition, Vol.31, No.4, pp.395-400 (1998).

    Abstract:

    We study a Gabor filter-based method for handwritten numeral character recognition. The Gabor filter is based on multi-channel filtering theory for processing visual information in the early stages of the human visual system. The performance of the Gabor filter-based method is demonstrated on the ETL-1 database. Experimental results show that the artificial neural network classifier achieved an error rate of 2.34% on a test set of 7000 characters. Therefore, the Gabor filter-based method should be considered for recognizing handwritten numeral characters.

    Keywords:

    numeral character recognition, Gabor filter, feature extraction
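
    The filtering step can be sketched with a hand-rolled real Gabor kernel (the parameter values, the toy stripe image, and all names below are our own choices, not the paper's filter-bank settings):

```python
# Sketch of Gabor-filter feature extraction (illustrative parameters).
# Each feature is the response of an oriented Gabor kernel to the image:
# a filter tuned to the stripe orientation responds far more strongly
# than one rotated 90 degrees away.
import math

def gabor_kernel(theta, lam=4.0, sigma=2.0, radius=4):
    """Real Gabor kernel: Gaussian envelope times an oriented cosine carrier."""
    k = {}
    for y in range(-radius, radius + 1):
        for x in range(-radius, radius + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)  # rotated axis
            env = math.exp(-(x * x + y * y) / (2 * sigma * sigma))
            k[(x, y)] = env * math.cos(2 * math.pi * xr / lam)
    return k

def response(image, kernel):
    """Correlation of the kernel with the image at the image centre."""
    return sum(w * image(x, y) for (x, y), w in kernel.items())

# Toy image: vertical stripes of period 4 (intensity varies along x only).
stripes = lambda x, y: math.cos(2 * math.pi * x / 4.0)

resp_0 = response(stripes, gabor_kernel(0.0))            # tuned to the stripes
resp_90 = response(stripes, gabor_kernel(math.pi / 2))   # orthogonal filter
```

    In a recognition pipeline, responses from a bank of such kernels at several orientations and scales form the feature vector fed to the classifier.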



Copyright (c) 2003 the Pattern Recognition Group, Dept. of Computer Science and Systems Engineering, Faculty of Engineering, Yamaguchi University