Record ID: 105 [ Page 2 of 4, No. 1 ]
Authors: Vio Jianu C. Mojica and Frumencio F. Co
Abstract:
An ideal outbreak detection algorithm must be able to generate alarms early in an outbreak while providing optimal sensitivity and specificity, so as to mitigate mortality and the costs of investigating and responding to these events. One disease of particular interest is measles, a highly contagious disease that has exhibited periodic outbreaks in the Philippines. The performance of the NGINAR(1) and ZINGINAR(1) models for measles outbreak detection was examined using simulated datasets and an application to reported measles cases in Cavite province from 2010 to 2017. The models were evaluated on their goodness-of-fit as well as the sensitivity, specificity, and timeliness of the detection thresholds they generated. Comparisons were made against ARIMA models and the popular Poisson INAR(1) model. Results show that INAR models have considerably higher probabilities of detection than ARIMA models, particularly for outbreaks of small magnitude. The Poisson INAR(1) generates the most alarms and thus has the highest sensitivity metrics. The NGINAR(1) and ZINGINAR(1) models, however, have lower false positive rates with outbreak detection capabilities comparable to the Poisson INAR(1). The NGINAR(1) model may be chosen as the best model given its simplicity and its balance of sensitivity, specificity, and timeliness, which is optimal for a disease such as measles.
Keywords: NGINAR(1), ZINGINAR(1), measles, outbreak detection, Cavite
Year: 2019 Vol.: 68 No.: 1
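The baseline model the abstract compares against, the Poisson INAR(1), can be simulated with binomial thinning: X_t = α∘X_{t-1} + ε_t, where each of the X_{t-1} counts survives with probability α and ε_t ~ Poisson(λ). A minimal sketch (parameter values are illustrative, not the paper's):

```python
import math
import random

def simulate_poisson_inar1(alpha, lam, n, seed=42):
    """Simulate a Poisson INAR(1) process: X_t = alpha o X_{t-1} + eps_t,
    where 'o' is binomial thinning (each of the X_{t-1} counts survives
    with probability alpha) and eps_t ~ Poisson(lam)."""
    rng = random.Random(seed)

    def poisson(mu):
        # Knuth's inversion-by-multiplication method (fine for small mu)
        limit, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1

    # start near the stationary mean lam / (1 - alpha)
    x = [poisson(lam / (1 - alpha))]
    for _ in range(n - 1):
        survivors = sum(1 for _ in range(x[-1]) if rng.random() < alpha)
        x.append(survivors + poisson(lam))
    return x
```

The NGINAR(1) and ZINGINAR(1) variants keep this recursion structure but swap the thinning operator and innovation distribution (negative-binomial thinning and zero-inflated innovations, respectively) to handle overdispersed and zero-heavy counts.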
Record ID: 114 [ Page 2 of 4, No. 2 ]
Authors: Francisco N. de los Reyes
Abstract:
A commonly studied characteristic of area data is the assessment of similarity (or absence thereof) among neighboring areal units. However, most methodologies do not measure uncertainties which are likely outcomes of sampling variation and do not consider spatial autocorrelation. This paper explores the ability of Bayesian modeling to address the said situations. It attempts to apply this modeling technique to the voting participation statistics in the Philippine National and Local Elections of 2016.
Keywords: conditional autoregressive (CAR), proximity matrix, dissimilarity, voter turnout
Year: 2018 Vol.: 67 No.: 1
Record ID: 113 [ Page 2 of 4, No. 3 ]
Authors: Peter Julian Cayton
Abstract:
In this paper, we discuss the folding procedure for peaks-over-thresholds (POT) models and their applications in market risk measurement, namely value-at-risk (VaR) and expected shortfall (ES). Folding is defined as a procedure in which, when data fall below a certain threshold value, a transformation formula moves the data points above the threshold. First, an initial fitting of the generalized Pareto distribution (GPD) over a temporary threshold is done. Second, using the initially fitted GPD estimates and a newly selected threshold, a folding transformation moves the data points below the new threshold to higher values. Third, the data points above the new threshold are fit to the GPD for inference and risk estimation. The risk measures from the folded GPD approach are compared with the ARMA-GARCH financial econometric approach and the unfolded POT approach in terms of their performance on real financial time series data such as stock indices and foreign currencies. The benefit of folding in the POT approach is lower estimated standard errors for the GPD parameters, provided an appropriate threshold has been selected. This indicates more accurate GPD parameter estimates, which lead to better VaR and ES estimates. The real data application shows that the VaR and ES from the folded POT methodology have fewer exceedances. Loss calculations indicate that the folded POT approach might mean higher capital adequacy: the conservatively set VaR and ES would cushion against extreme losses incurred from exceedance events.
Keywords:
Year: 2018 Vol.: 67 No.: 1
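The unfolded POT step that the folding procedure builds on can be sketched as follows. For brevity this uses a method-of-moments GPD fit rather than the maximum likelihood fitting a POT analysis would normally use, and the threshold and confidence level are illustrative:

```python
def pot_var(losses, u, p):
    """Peaks-over-thresholds VaR sketch: fit a GPD to the excesses over
    threshold u by method of moments, then invert the tail formula
    VaR_p = u + (sigma/xi) * (((n/n_u)*(1-p))**(-xi) - 1).
    Assumes xi != 0 and at least two excesses over u."""
    n = len(losses)
    exc = [x - u for x in losses if x > u]
    nu = len(exc)
    m = sum(exc) / nu                       # mean excess
    v = sum((e - m) ** 2 for e in exc) / nu  # variance of excesses
    xi = 0.5 * (1 - m * m / v)               # GPD shape (MoM)
    sigma = 0.5 * m * (1 + m * m / v)        # GPD scale (MoM)
    return u + (sigma / xi) * (((n / nu) * (1 - p)) ** (-xi) - 1)
```

The folded variant would first transform the points below the chosen threshold upward before this fitting step, which is what shrinks the GPD standard errors.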
Record ID: 112 [ Page 2 of 4, No. 4 ]
Authors: Manuel Leonard Albis and Jessmond Elviña
Abstract:
The multidimensional poverty index (MPI) captures more welfare characteristics than income- or expenditure-based poverty measures. It is an emerging social statistic that must be understood to guide poverty alleviation policies. This paper identifies employment characteristics with robust effects on the MPI using Bayesian averaging of classical estimates (BACE). The results indicate that being employed decreases the MPI, but the length and nature of employment add to it. Community public goods, as well as remittances, decrease the MPI, among the other control variables considered. If the aim is to reduce the MPI, priority in uplifting policy measures should be given to laborers working for different employers rather than to contractual workers.
Keywords: MPI, underemployment, BACE
Year: 2018 Vol.: 67 No.: 1
Record ID: 111 [ Page 2 of 4, No. 5 ]
Authors: Majah-Leah Ravago, Dennis Mapa, Jun Carlo Sunglao and James Roumasset
Abstract:
We explored how local governments respond to disasters due to natural hazards to determine the mix of risk management and coping strategies (ex ante and ex post) they employ to improve welfare. We focused on disasters caused by hydro-meteorological hazards that occur with high frequency and high probability. Using data from a novel survey we conducted on disaster risk management practices of local government units (LGUs) in the Philippines, we developed indices of the various risk management and coping strategies of LGUs to explain what aids in their recovery from disasters. The most prominent strategies are risk-coping activities, especially cleanup operations and receiving relief from others. Among ex ante activities, employing long-term precautionary measures improves recovery. These include building resilient housing units; investing in stronger public facilities; building dams, dikes, and embankments; upgrading power and water lines; maintaining roads; identifying relocation areas; and rezoning and land-use regulations. In contrast, interruption of lifeline services such as water and electricity contributes adversely to recovery. Evidence also shows that LGUs’ profile characteristics matter. An LGU with higher local revenues has higher chances of recovery. On the other hand, being located in a province where dynasty share is high contributes negatively to an LGU’s recovery. The combination of these ex ante and ex post risk management strategies informs policies on where to put priority and investments in disaster risk management.
Keywords: Disaster, shock, coping, risk management, local government
Year: 2018 Vol.: 67 No.: 1
Record ID: 110 [ Page 2 of 4, No. 6 ]
Authors: Lisa Grace S. Bersales, Divina Gracia L. del Prado and Mae Abigail O. Miralles
Abstract:
The current official measurement of poverty published by the Philippine Statistics Authority is based on income. This does not capture the multidimensional deprivations suffered by Filipinos. This paper discusses a multidimensional poverty index (MPI) for the Philippines using four dimensions with thirteen indicators. These dimensions are education; health and nutrition; housing, water and sanitation; and employment. The Alkire-Foster (AF) method of computing multidimensional poverty measures is adopted, with nested uniform weights as the weighting scheme and 1/3 as the poverty cutoff. Other weighting schemes (nested inverse incidence and subjective welfare) and poverty cutoffs (1/4 and 1/5) are also explored. Results reveal that the choice of weighting scheme and poverty cutoff does not greatly affect the trend of the multidimensional poverty measures or the ranks of the dimensions in terms of their contribution to multidimensional poverty.
Keywords: multidimensional poverty, MPI, poverty, headcount ratio, intensity
Year: 2018 Vol.: 67 No.: 1
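The Alkire-Foster computation described above reduces to a few lines: weight each person's deprivation indicators, censor at the poverty cutoff k, and multiply the headcount ratio H by the intensity A. A minimal sketch with hypothetical data (real applications weight 13 indicators across the 4 dimensions):

```python
def alkire_foster_mpi(deprivations, weights, k):
    """Alkire-Foster MPI: deprivations is a list of 0/1 indicator
    vectors (one per person), weights sum to 1, k is the poverty
    cutoff (e.g. 1/3). Returns (H, A, MPI) where MPI = H * A."""
    scores = [sum(w * d for w, d in zip(weights, row))
              for row in deprivations]
    poor = [s for s in scores if s >= k]   # censor at the cutoff
    if not poor:
        return 0.0, 0.0, 0.0
    H = len(poor) / len(scores)            # headcount ratio
    A = sum(poor) / len(poor)              # intensity among the poor
    return H, A, H * A
```

Changing `weights` and `k` here is exactly the sensitivity exercise the abstract describes (uniform vs. inverse-incidence weights; cutoffs of 1/3, 1/4, 1/5).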
Record ID: 104 [ Page 2 of 4, No. 7 ]
Authors: Novee Lor Leyso, Arturo Martinez Jr., and Iva Sebastian
Abstract:
Recognizing that urban areas play a key role in addressing poverty and inequality in line with Sustainable Development Goals (SDGs) 1 and 10, respectively, it is necessary to understand the dynamics of the economic well-being of people living in urban areas in order to formulate appropriate and effective strategies. Using economic mobility as a metric of well-being, this study examines whether the population size of urban areas affects people's mobility prospects. We investigate this issue using longitudinal expenditure data from Indonesia and the Philippines. Our results show that city size has mixed effects on directional mobility in the two countries: it has a significant negative impact on the probability that Indonesians experience upward mobility, whereas its effect on the probability that Filipinos experience upward mobility is positive. In both countries, people living in megacities and micro urban areas experience more non-directional mobility with respect to several economic mobility measures.
Keywords: Economic mobility, Urbanization, Urban Poverty, Inequality, City Size, Panel Data, and Multinomial Logistic Regression
Year: 2017 Vol.: 66 No.: 2
Record ID: 103 [ Page 2 of 4, No. 8 ]
Authors: Isabella Benabaye, Patricia Rose Donato and John D. Eustaquio
Abstract:
In making and assessing family planning policies and programs, it is vital to investigate fertility preference, as it not only reveals a woman's ideal number of children and the couple's consensus on it but also captures information on unwanted and mistimed pregnancies. The theoretical relationships of a woman's ideal number of children with micro-level factors, such as her experience with child mortality, her level of household authority, and household family planning awareness, were examined in two cases: first, among women who have achieved their fertility preference, and second, among women who have not. This study also examined the factors affecting the contraceptive behavior of women who have not achieved their fertility preference, specifically (a) contraceptive users, (b) non-users who intend to use contraceptives later, and (c) non-users with no intention to use them. The differences in the factors influencing the ideal number of children between women who have and have not met their fertility preference show that, rather than factors related to family planning, the ideal number of children for women with unmet fertility preference is decreased by factors that suggest a lack of women's empowerment. The analysis of contraceptive behavior, meanwhile, identified factors that can hinder the realization of women's intention to practice contraception.
Keywords: fertility preference, contraceptive behavior, Poisson count model, binary regression
Year: 2017 Vol.: 66 No.: 2
Record ID: 102 [ Page 2 of 4, No. 9 ]
Authors: Joshua Mari J. Paman, Frank Niccolo M. Santiago, Vio Jianu C. Mojica, Frumencio F. Co, and Robert Neil F. Leong
Abstract:
It is the goal of many developing countries to stop the spread of diseases. Part of this effort is ongoing surveillance of disease transmission to foresee future epidemics. In the Philippines, however, there is no automated method for detecting them. This paper presents a comparison between an integer-valued autoregressive (INAR) model and the more commonly known autoregressive integrated moving average (ARIMA) models in detecting the presence of disease outbreaks. Daily measles reports spanning January 1, 2010 to January 14, 2015 were obtained from the Department of Health and used to motivate this study. Synthetic datasets were generated using a modified Serfling model. Similarity tests using a dynamic time warping algorithm were conducted to ensure that the simulated datasets exhibit behavior similar to the original set. False positive rates, sensitivity rates, and delay in detection were then evaluated for the two models. The results show that the INAR model performs favorably compared to the ARIMA model, posting higher sensitivity rates, similar lag times, and equivalent false positive rates for three-day signal events.
Keywords: measles, biosurveillance, integer-valued autoregressive model, Serfling model, dynamic time warping
Year: 2017 Vol.: 66 No.: 2
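A Serfling-type baseline of the kind used to generate the synthetic datasets combines a linear trend with an annual sinusoid; observed counts far above this expectation signal a possible outbreak. The coefficients below are placeholders for illustration, not the paper's fitted values:

```python
import math

def serfling_baseline(t, alpha=5.0, beta=0.001, g1=2.0, g2=1.0,
                      period=365):
    """Serfling-type expected count at day t: intercept, linear trend,
    and one annual harmonic. Coefficients here are illustrative only."""
    return (alpha + beta * t
            + g1 * math.cos(2 * math.pi * t / period)
            + g2 * math.sin(2 * math.pi * t / period))
```

In a modified Serfling simulation, outbreak periods are created by adding a pulse of excess cases on top of this baseline, and counts are drawn around the resulting expectation.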
Record ID: 101 [ Page 2 of 4, No. 10 ]
Authors: Suntaree Unhapipat, Nabendu Pal and Montip Tiensuwan
Abstract:
This paper takes a fresh look at point estimation of model parameters under a Zero-Inflated Poisson (ZIP) distribution. The reason is that some finer details of point estimation, if overlooked, may lead to wrong estimates, as has happened in earlier research. In this paper we achieve the following new results: (a) a new set of corrected method-of-moments estimators is proposed; (b) we show how the standard technique of differentiating the log-likelihood function to find the maximum likelihood estimators may lead to wrong estimates, and how to avoid this problem; and (c) a new adjusted maximum likelihood estimation technique is proposed which not only always produces meaningful estimates but also appears to work better than all other estimation techniques in terms of standardized mean squared error (SMSE) when the ZIP is used to model rare events. Finally, datasets on rare events are used to demonstrate the estimation techniques and show how the ZIP distribution can be used to model such data.
Keywords: Maximum likelihood estimation, method of moments estimation, standardized mean squared error, standardized bias, goodness of fit test.
Year: 2017 Vol.: 66 No.: 2
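The moment structure that the corrected estimators in (a) start from is standard: for a ZIP with zero-inflation probability π and Poisson rate λ, E[X] = (1-π)λ and Var(X) = (1-π)λ(1+πλ). Solving these two equations gives the textbook method-of-moments estimates, sketched below; the paper's corrected versions add safeguards for boundary cases, which this sketch does not reproduce:

```python
def zip_moment_estimates(data):
    """Textbook method-of-moments estimates for the Zero-Inflated
    Poisson. Valid only when the sample variance exceeds the sample
    mean (overdispersion), so that 0 < pi_hat < 1."""
    n = len(data)
    m = sum(data) / n
    v = sum((x - m) ** 2 for x in data) / n  # population variance
    lam = (v + m * m - m) / m                # solves the two moment eqns
    pi = 1 - m / lam
    return lam, pi
```

When the data are underdispersed, this yields a negative π̂, one of the "finer details" that naive estimation overlooks.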
Record ID: 100 [ Page 2 of 4, No. 11 ]
Authors: Rama Shanker and Kamlesh Kumar Shukla
Abstract:
A zero-truncated new quasi Poisson-Lindley distribution (ZTNQPLD), which includes the zero-truncated Poisson-Lindley distribution (ZTPLD) as a particular case, is studied. Its probability mass function is obtained by compounding the size-biased Poisson distribution (SBPD) with an assumed continuous distribution. The rth factorial moment of the ZTNQPLD is derived, and from it the raw and central moments are presented. Expressions for the coefficient of variation, skewness, kurtosis, and index of dispersion are given, and their nature and behavior are studied graphically. Maximum likelihood estimation is discussed for estimating the parameters of the ZTNQPLD. Finally, the goodness of fit of the ZTNQPLD is assessed on several datasets, and the fit is found to be better than those of the zero-truncated Poisson distribution (ZTPD) and the zero-truncated Poisson-Lindley distribution (ZTPLD).
Keywords: Zero-truncated distribution, New quasi Poisson-Lindley distribution, compounding, moments, Maximum Likelihood estimation, Goodness of fit.
Year: 2017 Vol.: 66 No.: 2
Record ID: 99 [ Page 2 of 4, No. 12 ]
Authors: Michael Van B. Supranes and Joseph Ryan G. Lansangan
Abstract:
One approach to modeling high-dimensional data is to apply an elastic net (EN) regularization framework. EN has the good properties of the least absolute shrinkage and selection operator (LASSO); however, it tends to keep variables that are strongly correlated with the response, which may result in an undesirable grouping effect. The Layered Elastic Net Selection (LENS) is proposed as an alternative framework for utilizing EN such that the interrelatedness and groupings of predictors are explicitly considered in the optimization and/or variable selection. Assuming groups are available, LENS applies the EN framework group-wise in a sequential manner. Based on the simulation study, LENS can achieve ideal selection behavior and may exhibit a more appropriate grouping effect than the usual EN. LENS alone yields poor prediction accuracy, but applying OLS to the selected variables may yield optimal results. At optimal conditions, the mean squared prediction error of OLS on LENS-selected variables is on par with that of OLS on EN-selected variables. Overall, applying OLS to LENS-selected variables strikes a better compromise between prediction accuracy and ideal grouping effect.
Keywords: regression, variable selection, variable clustering, high dimensional data, elastic net, grouping effect
Year: 2017 Vol.: 66 No.: 2
Record ID: 98 [ Page 2 of 4, No. 13 ]
Authors: Alex C. Gonzaga
Abstract:
We derive the asymptotic properties of the discrete wavelet packet transform (DWPT) of the generalized long-memory stochastic volatility (GLMSV) model, a relatively general model of stochastic volatility that accounts for persistent (or long-memory) and seasonal (or cyclic) behavior at several frequencies. We derive the rates of convergence to zero of between-scale and within-scale wavelet packet coefficients at different subbands. Wavelet packet coefficients in the same subband can be shown to be approximately uncorrelated by an appropriate choice of basis vectors using a white noise test. These results may be used to simplify the variance-covariance matrix into a diagonal matrix whose diagonal elements have the fewest distinct variances to compute.
Keywords: discrete wavelet packet transform, generalized long-memory stochastic volatility, asymptotic decorrelation
Year: 2017 Vol.: 66 No.: 2
Record ID: 96 [ Page 2 of 4, No. 14 ]
Authors: Ali H. Abuzaid and Raida F. Zaqout
Abstract:
This study addresses the factors affecting the weaning time of Palestinian children, based on data from the 2006 Palestinian family survey. The Weibull parametric model was found to be the most appropriate fit to the data. The study showed that factors such as the child's weight at birth, the child's age, the mother's age at delivery, and the mother's educational status have significant effects on weaning time. The findings also revealed that factors such as the mother's refugee status, locality type, total live births, and the mother's smoking status have no significant effect, at the 0.05 level of significance, on the duration of breastfeeding.
Keywords: breastfeeding, censored, Cox proportional model, Wald statistic
Year: 2017 Vol.: 66 No.: 1
Record ID: 95 [ Page 2 of 4, No. 15 ]
Authors: Michelle B. Besana and Philip Ian P. Padilla
Abstract:
The analysis of covariance (ANCOVA) model with a heterogeneous-variance first-order autoregressive error covariance structure (ARH1) was used to model differences in fecal streptococci concentration in the Iloilo River over time, with fixed site and seasonal effects as the primary factors of interest and water temperature, pH, dissolved oxygen, and salinity as covariates. Restricted maximum likelihood (REML) estimation was used to derive the parameter estimates, and the Kenward-Roger adjustment to the degrees of freedom was used to better approximate the distributions of the test statistics. The effect of season was highly significant (p = 0.0019). The site effect was significant at the 0.0539 level. The effects of water surface temperature and pH were significant at the 0.0655 and 0.0828 levels, respectively. The effects of dissolved oxygen and salinity were not significant. Although the coefficient of determination was modest, the results are useful in characterizing the dynamics of the Iloilo River bacteriological system, contributing to an improved understanding of Iloilo River water quality.
Keywords: analysis of covariance (ANCOVA), heterogeneous variance first-order autoregressive error covariance structure (ARH1), restricted maximum likelihood estimation (REML), fecal indicator bacteria (FIB), fecal streptococcus
Year: 2017 Vol.: 66 No.: 1
Record ID: 94 [ Page 2 of 4, No. 16 ]
Authors: Mynard Bryan R. Mojica and Claire Dennis S. Mapa
Abstract:
Financial inclusion has become a policy priority in many developing countries, including the Philippines. However, the issue of its robust measurement is still outstanding. The challenge comes from the fact that financial inclusion is a multidimensional phenomenon; a comprehensive measure is therefore needed to adequately gauge the inclusiveness of a financial system. This paper constructs a Financial Inclusion Index (FII) to measure access to and usage of financial services in the Philippines using provincial data. Results show that while there are marked geographical disparities in the FII, there is significant positive spatial autocorrelation, indicating that nearby provinces exhibit similar levels of financial inclusion. The paper also examines the relationship between the FII and variables often linked to financial inclusion, such as income, poverty, literacy, and employment, as well as a province's level of human development and competitiveness. On the methodological side, possible improvements and technical innovations in constructing the FII are laid out to maximize its potential as an analytical tool for surveillance and policy-making.
Keywords: inclusive finance, composite indicator, financial inclusion index
Year: 2017 Vol.: 66 No.: 1
Record ID: 93 [ Page 2 of 4, No. 17 ]
Authors: Dixi M. Paglinawan
Abstract:
We compared ratio and regression estimators empirically based on bias and coefficient of variation. Simulation studies accounting for sampling rate, population size, heterogeneity of the auxiliary variable x, deviation from linearity, and model misspecification were conducted. The study shows that the ratio estimator is better than the regression estimator when the regression line passes close to the origin. Both estimators still work even when there is only a weak linear relationship between x and y, provided that model misspecification is minimal, if not absent. When the relationship between the target variable and the auxiliary variable is very weak, bootstrap estimates yield lower bias. The regression estimator is generally more efficient than the ratio estimator.
Keywords: auxiliary variable, ratio estimator, regression estimator, bootstrap estimator
Year: 2017 Vol.: 66 No.: 1
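The two estimators being compared have simple closed forms when the population mean X̄ of the auxiliary variable is known. A sketch with illustrative data:

```python
def ratio_estimate(y, x, X_bar):
    """Ratio estimator of the population mean of y using auxiliary
    variable x with known population mean X_bar: (y_bar/x_bar)*X_bar."""
    return (sum(y) / sum(x)) * X_bar

def regression_estimate(y, x, X_bar):
    """Linear regression estimator: y_bar + b*(X_bar - x_bar),
    with b the least-squares slope from the sample."""
    n = len(y)
    xb, yb = sum(x) / n, sum(y) / n
    b = (sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
         / sum((xi - xb) ** 2 for xi in x))
    return yb + b * (X_bar - xb)
```

When y is exactly proportional to x the two estimates coincide, which is the limiting case behind the abstract's finding that the ratio estimator does best when the regression line passes near the origin.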
Record ID: 92 [ Page 2 of 4, No. 18 ]
Authors: Gajendra K. Vishwakarma and Sayed Mohammed Zeeshan
Abstract:
In this paper, a class of ratio-cum-product type exponential estimators is proposed under simple random sampling to estimate the population mean. The proposed class of estimators is compared with existing estimators. We compared the efficiency of the proposed class with the standard estimators and found through an empirical study that the previous estimators are inferior to the proposed class. The populations used in the empirical study vary widely from one another, demonstrating the superiority of the proposed estimators across a range of situations.
Keywords: auxiliary variable, study variable, simple random sampling, ratio type estimators, product type estimators, bias, MSE
Year: 2017 Vol.: 66 No.: 1
Record ID: 91 [ Page 2 of 4, No. 19 ]
Authors: Robert Neil F. Leong, Frumencio F. Co, Vio Jianu C. Mojica and Daniel Stanley Y. Tan
Abstract:
Inspired by the capability of exponentially-weighted moving average (EWMA) charts to balance sensitivity and false alarm rates, we propose one for zero-truncated Poisson processes. We present a systematic design and analytic framework for implementation. Further, we add a fast initial response (FIR) feature, which ideally increases sensitivity without compromising false alarm rates. The proposed charts (basic and with the FIR feature) were evaluated on both the in-control average run length (ARL0), which measures the false alarm rate, and the out-of-control average run length (ARL1), which measures sensitivity to unwanted shifts. The evaluation used a Markov chain intensive simulation study at different settings for different weighting parameters (ω). Empirical results suggest that, in both scenarios, the basic chart had (1) ARLs that increase exponentially as a function of the chart threshold L, and (2) ARLs that were longer for smaller ω. Moreover, the added FIR feature improved ARL1 by 5% to 55%, resulting in quicker shift detection at a relatively minimal loss in ARL0. These results were also compared with Shewhart and CUSUM control charts at similar settings, and the EWMA charts generally performed better by striking a balance between higher ARL0 and lower ARL1. These advantages of the EWMA charts were more pronounced for larger shifts in the parameter λ. Finally, a case application in monitoring out-of-control events in hospital surgical data is presented to demonstrate usability in a real-world setting.
Keywords: exponentially-weighted moving average control chart, zero-truncated Poisson process, fast initial response feature, average run length, infection control
Year: 2017 Vol.: 66 No.: 1
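The basic EWMA recursion and its exact time-varying limits, which the proposed chart adapts to zero-truncated Poisson counts, can be sketched as follows (μ0, σ0, ω, and L are user-set design constants; the values in the test below are illustrative):

```python
def ewma_signals(x, mu0, sigma0, omega=0.2, L=3.0):
    """EWMA statistic Z_t = omega*x_t + (1-omega)*Z_{t-1} with exact
    (time-varying) control limits
    mu0 +/- L*sigma0*sqrt(omega/(2-omega)*(1-(1-omega)**(2t))).
    Returns the 0-based indices of points outside the limits.
    FIR variants narrow these limits for small t to detect shifts
    already present at start-up more quickly."""
    z, signals = mu0, []
    for t, xt in enumerate(x, start=1):
        z = omega * xt + (1 - omega) * z
        half = L * sigma0 * (omega / (2 - omega)
                             * (1 - (1 - omega) ** (2 * t))) ** 0.5
        if abs(z - mu0) > half:
            signals.append(t - 1)
    return signals
```

Smaller ω gives the statistic a longer memory, which (as the abstract reports for ARLs) makes the chart slower to alarm but more sensitive to small sustained shifts.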
Record ID: 90 [ Page 2 of 4, No. 20 ]
Authors: Erniel B. Barrios and Kevin Carl P. Santos
Abstract:
We introduce panel models and identify their link to spatial-temporal models. Both are characterized and differentiated through the variance-covariance matrix of the disturbance term. The resulting estimates and tests are as complicated as the nature of that variance-covariance matrix. Some iterative methods typically used in computational statistics are also presented; these methods are used in conducting statistical inference for spatial-temporal models.
Keywords: panel data, spatial-temporal model, forward search algorithm, additive models, backfitting algorithm, isotonic regression
Year: 2017 Vol.: 66 No.: 1
Record ID: 89 [ Page 2 of 4, No. 21 ]
Authors: Mia Pang Rey and Ivy D.C. Suan
Abstract:
The objective of this paper is to present a theoretical model that can assist community-based health maintenance providers in handling their actuarial risk. It determines the factors and conditions under which the said model can be made financially sustainable. The break-even formulas for some of the parameters are derived. It likewise examines the amount of reserves needed to manage underwriting risk.
Keywords: health maintenance programs, sustainability
Year: 2016 Vol.: 65 No.: 2
Record ID: 88 [ Page 2 of 4, No. 22 ]
Authors: Aldrin Y. Cantila, Sailila E. Abdula, Haziel Jane C. Candalia and Gina D. Balleras
Abstract:
Released rice varieties are genetic resources that carry good genes. To define the potential of this germplasm, genetic divergence analysis must be done. The study used statistical tools including descriptive statistics, the Kolmogorov-Smirnov test, the Shannon-Weaver diversity index (H’), correlation statistics (r), principal component analysis (PCA), Dixon’s test, and clustering statistics to evaluate 29 NSIC (National Seed Industry Council) released varieties based on 11 morphological traits. Descriptive statistics showed significant differences among the traits, which followed a normal distribution. The Shannon-Weaver index ranged from 0.55 (number of filled grains per panicle, NFGP) to 0.91 (grain yield, GY, and number of tillers, NT), indicating moderately to highly diverse traits. Correlations among traits ranged from r = -0.55 to 0.84, with GY positively correlated with all traits. PCA accounted for 39.95% and 26.10% of the variation in PC1 and PC2, respectively. Panicle weight (PW), a yield component trait, was the largest contributor of positive projections in the two PCs, which together explained 66.05% of the variation. PCA also detected two latent traits, GY and spikelet fertility (SF), as confirmed by Dixon’s test, in which an outlier was found in SF and in yield-contributing traits. Clustering statistics separated the varieties into five clusters, with Euclidean distances (ED) ranging from 5.88 to 106.22. Among the clusters, the fifth, composed of the single variety NSIC Rc240, gave the highest GY (7.07 t/ha), NFGP (152.67), one-thousand-grain weight (24.77 g), PW (5.08 g), and spikelet number per panicle (185.33). This variety could potentially be adapted to, and a good source of genes for, rice improvement localized in General Santos City.
Keywords: clustering statistics, correlation statistics, descriptive statistics, Shannon-Weaver index, rice released varieties, principal component analysis
Year: 2016 Vol.: 65 No.: 2
Record ID: 87 [ Page 2 of 4, No. 23 ]
Authors: Divo Dharma Silalahi, Consorcia E. Reaño, Felino P. Lansigan, Rolando G. Panopio and Nathaniel C. Bantayan
Abstract:
Using Near Infrared Spectroscopy (NIRS) spectral data, the performance of Linear Discriminant Analysis (LDA) was compared with that of a Genetic Algorithm Neural Network (GANN) in solving the classification (assignment) problem of ripeness grading of oil palm fresh fruit. LDA is a well-known classical statistical technique used for classification and dimensionality reduction. GANN is a modern computational statistical method in soft computing with an adaptive nature. The first four component variables from Principal Component Analysis (PCA) were used as input variables to increase efficiency and speed up the analysis. Based on the results, in both the training and validation phases the GANN technique had lower Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), a higher percentage of correct classification, and was better suited to handling large amounts of data than the LDA technique. Therefore, the GANN technique is superior to LDA in precision and error rate for handling the hyperdimensional problem of ripeness classification of oil palm fresh fruit.
Keywords: Near Infrared Spectroscopy, Neural Network, Genetic Algorithm, Linear Discriminant Analysis, Principal Component Analysis, Oil Palm, Ripeness
Year: 2016 Vol.: 65 No.: 2
Record ID: 86 [ Page 2 of 4, No. 24 ]
Authors: May Ann S. Estoy and Joseph Ryan G. Lansangan
Abstract:
Quantile regression and restricted maximum likelihood are incorporated into a backfitting approach to estimate a linear mixed model for clustered data. Simulation studies covering a wide variety of scenarios relating to clustering, presence of outliers, and model specification error are conducted to assess the performance of the proposed methods. The methods yield biased estimates yet high predictive ability compared to ordinary least squares and ordinary quantile regression.
Keywords: linear mixed models; quantile regression; restricted maximum likelihood; backfitting; bootstrap; clustered data
Year: 2016 Vol.: 65 No.: 2
Record ID: 85 [ Page 2 of 4, No. 25 ]
Authors: John D. Eustaquio
Abstract:
A nonparametric test for clustering in survival data based on the bootstrap method is proposed. The survival model used considers the isotonic property of the covariates in the estimation via the backfitting algorithm. Assuming a model that incorporates the clustering effect into the piecewise proportional hazards model, simulation studies indicate that the procedure is correctly-sized and powerful in a reasonably wide range of scenarios. The test procedure for the presence of clustering over time is also robust to model misspecification.
Keywords: Bootstrap confidence interval; Survival Analysis; Clustered Data; Backfitting Algorithm; Generalized Additive Models; Nonparametric bootstrap.
Year: 2016 Vol.: 65 No.: 2
Record ID: 84 [ Page 2 of 4, No. 26 ]
Authors: Daniel R. Raguindin and Joseph Ryan G. Lansangan
Abstract:
A semiparametric probit model for high dimensional clustered data and its estimation procedure are proposed. The model is characterized by flexibility in the model structure through a nonparametric formulation of the effect of the predictors on the dichotomous response and a parametric specification of the inherent heterogeneity due to clustering. The predictive ability of the model is further investigated by looking at possible factors such as dimensionality, presence of misspecification, clustering, and response distribution. Simulation studies illustrate the advantages of using the proposed model over the ordinary probit model even in low dimensional cases. High predictive ability is observed in high dimensional cases especially when the distribution of the response categories is balanced. Results show that cluster distribution and functional form of the response variable do not affect the performance of the model. Also, the predictive ability of the proposed estimation increases as the number of clusters increases. Under the presence of misspecification, the predictive ability of the model is slightly lower yet remains better than the ordinary probit model.
Keywords: probit model, high dimensional data, backfitting algorithm, local scoring algorithm
Year: 2016 Vol.: 65 No.: 2
Record ID: 97 [ Page 2 of 4, No. 27 ]
Authors: Paul Eric G. Abeto and Joseph Ryan G. Lansangan
Abstract:
Monitoring processes in an industry is one means of ensuring the quality of goods produced or services provided. Control charts are constructed by estimating control limits within which the process can be considered stable. The estimation is made by analyzing the behavior of the monitored process. However, the assumptions of uncorrelated and normally distributed measurements, common in most control charts, are sometimes uncharacteristic of the monitored process. Also, data from other variables may be available and may provide meaningful information on the behavior of the monitored process, and may thus be valuable in the estimation of the control limits. In this paper, a methodology for estimating control limits of autocorrelated processes using sparse principal component regression on high dimensional exogenous variables is proposed. Simulations are conducted to study different scenarios that may affect the proposed estimation. The false alarm rate, average run length during stable periods, and first detection rate upon structural change are used as key indicators for characterization and comparison. Simulation results suggest that modelling a process using high dimensional exogenous variables through sparse principal components yields better estimates of its corresponding control chart parameters. False alarm rates and average run lengths were comparable with those of the Exponentially Weighted Moving Average (EWMA) control chart. Also, faster identification of structural change was observed, potentially because the process is modelled in terms of the additional information carried by the exogenous variables.
Keywords: Control chart, autocorrelated process, high dimensional data, sparse principal component regression
Year: 2016 Vol.: 65 No.: 2
Record ID: 83 [ Page 2 of 4, No. 28 ]
Authors: Divina Gracia L. Del Prado and Erniel B. Barrios
Abstract:
A spatiotemporal model with nested random effects is proposed for small area estimation where sample data are generated from a rotating panel survey. Two methods of estimation are introduced, integrating the backfitting algorithm and the bootstrap procedure in two different approaches. A simulation study shows superior predictive ability of the fitted model. The small area estimation methods also produced efficient estimates of parameters in a wide class of population scenarios. The model-based small area estimation procedure also outperforms the design-based approach in estimating the unemployment rate from the Philippine Labor Force Survey.
Keywords: spatiotemporal mixed model; small area estimation; backfitting algorithm; bootstrap.
Year: 2016 Vol.: 65 No.: 2
Record ID: 82 [ Page 2 of 4, No. 29 ]
Authors: Karl Anton M. Retumban
Abstract:
The interdependence of the Philippine Stock Exchange Sector Indices was analyzed using Johansen’s cointegration test, Granger causality, and forecast error variance decomposition. Daily, weekly, and monthly data were used from January 2006 up to June 2015. The results confirm the existence of cointegration among the six sector indices, implying that the indices follow a common trend and have a long-run relationship. This holds across the daily, weekly, and monthly data. There is also uni-directional causality among the sector indices. Aside from each sector index’s own shock largely influencing its own variation, the innovations from the financial sector index contribute significantly to the variation of the other sector indices.
Keywords: Johansen’s Cointegration, Granger Causality, Forecast Error Variance Decomposition, Philippine Stock Exchange Sector Indices
Year: 2016 Vol.: 65 No.: 1
Record ID: 81 [ Page 2 of 4, No. 30 ]
Authors: Arturo Martinez Jr., Mark Western, Wojtek Tomaszewski, Michele Haynes, Maria Kristine Manalo, and Iva Sebastian
Abstract:
Using counterfactual simulations, we investigate the various factors that could explain the changes observed in poverty and inequality in the Philippines over the past decade. To do this, we decomposed per capita household income as a stochastic function of various forms of socio-economic capital and the socio-economic returns to capital. The results indicate that higher levels of asset ownership and higher economic returns to formal and non-agricultural employment have contributed to lower poverty, while human capital and access to basic services remain stagnant and thus had no impact on poverty and inequality. In general, we find that changes in socio-economic capital and changes in economic returns to capital act as offsetting forces that contribute to slow poverty and inequality reduction despite the rapid economic growth that the Philippines has experienced over the past ten years.
Keywords: income decomposition, counterfactual simulation, poverty, inequality
Year: 2016 Vol.: 65 No.: 1
Record ID: 80 [ Page 2 of 4, No. 31 ]
Authors: Paolo Victor T. Redondo
Abstract:
Purposive sampling is a non-probability sampling method that is often used whenever random/probability sampling is inefficient, too costly (in money or time), or not feasible. Also, much of the data collected for present-day studies consist of counts, and the analysis of such data needs an appropriate tool, commonly Poisson regression. The goal of this study is to determine whether relative location-based purposive sampling can improve the estimates produced by Poisson regression, and whether the proposed sampling procedure can reduce the required sample size while maintaining efficient, good quality results. Simulations of different scenarios are conducted, and several possible partitions (based on relative location) from which the sample will be drawn are considered. Some partitions are found to work better even for small sample sizes, say 50, while others work as well as their respective simple random sample counterparts.
Keywords: purposive sampling, poisson regression, sample size
Year: 2016 Vol.: 65 No.: 1
Record ID: 79 [ Page 2 of 4, No. 32 ]
Authors: Jachelle Anne Dimapilis
Abstract:
We propose a procedure for monitoring progress in sustainable development as measured by indices. An AR-sieve-based nonparametric prediction interval is constructed to determine whether the movement of the indices is significant or not. Points outside the interval are considered significant and imply positive or negative movement of the indices. This method is used in the construction of a prediction interval for a sustainable development index for the Philippines. The interval is indeed capable of detecting significant movements that can be explained by policies and other factors.
Keywords: sustainable development index, AR-sieve bootstrap, nonparametric prediction interval
Year: 2016 Vol.: 65 No.: 1
Record ID: 78 [ Page 2 of 4, No. 33 ]
Authors: Joselito C. Magadia
Abstract:
A self-exciting threshold autoregressive (SETAR) model is fitted to the PSEi, and value-at-risk (VaR) estimates are computed. Backtesting procedures are employed to assess the accuracy of the estimates, which are compared with estimates derived from two other approaches to VaR estimation.
Keywords: threshold models, backtesting, APARCH
Year: 2016 Vol.: 65 No.: 1
Record ID: 77 [ Page 2 of 4, No. 34 ]
Authors: Michael Daniel C. Lucagbo
Abstract:
The task of classifying Philippine households according to their socioeconomic class (SEC) has been tackled anew in a collaborative work between the Marketing and Opinion Research Society of the Philippines (MORES), the former National Statistics Office (NSO), and the University of the Philippines School of Statistics. This new system of classifying Philippine households was introduced at the 12th National Convention on Statistics, in a paper entitled 1SEC 2012: The New Philippine Socioeconomic Classification. To predict the SEC of a household, certain household characteristics are used as predictors. The 1SEC Instrument, whose scoring system is based on the ordinal logistic regression model, is then used to predict the household’s SEC. Recently, the statistical literature has seen the development of novel tree-based learning algorithms. This paper shows that the ordinal logistic regression model can still classify households better than three popular tree-based statistical learning methods: bootstrap aggregation (or bagging), random forests, and boosting. In addition, this paper identifies which clusters are easier to predict than others.
Keywords: socioeconomic classification, ordinal logistic regression, bagging, random forests, boosting
Year: 2016 Vol.: 65 No.: 1
Record ID: 76 [ Page 2 of 4, No. 35 ]
Authors: Michael Van B. Supranes; John Francis J. Guntan; Joy Pauline Adrienne C. Padua; Joseph Ryan G. Lansangan
Abstract:
Range restriction is a known cause of underestimation of the Cronbach’s Alpha reliability coefficient. The estimate of Cronbach’s Alpha is usually adjusted to minimize bias, but existing methods require information about the population. In the case of indirect range restriction, however, such information may not be readily or intuitively available. A data-driven, bootstrap-based estimator that requires minimal assumptions about the unrestricted population, called the Recursive Alpha (RAlph) coefficient, is therefore proposed. Based on the simulation studies, the two versions of the RAlph coefficient perform best when the information associated with the range restriction is strongly correlated with the characteristic being measured, and when the true reliability coefficient Alpha is high. Also, the RAlph coefficients are found to be effective in minimizing the error in estimating Alpha under a strong presence of range restriction. Moreover, considerations on the length of the instrument, the scale of the responses, and the sample size aid in minimizing the error of the proposed coefficients. In support of the simulation results, an empirical study using behavioral data on social media users is carried out, and evidently the RAlph coefficients are far better than the ordinary Cronbach’s Alpha estimate.
Keywords: Range Restriction, Adjusted Cronbach’s Alpha, Bootstrap Sampling
Year: 2015 Vol.: 64 No.: 2
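The ordinary Cronbach's Alpha that the proposed RAlph coefficient adjusts follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). As a minimal illustration only (not code or data from the paper; the scores below are hypothetical):

```python
def cronbach_alpha(items):
    """Ordinary Cronbach's Alpha: k/(k-1) * (1 - sum of item variances
    / variance of the total scores). `items` is a list of k equal-length
    score lists, one per item."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Hypothetical responses: 3 items scored by 5 respondents
scores = [[3, 4, 5, 2, 4], [3, 5, 5, 1, 4], [4, 4, 5, 2, 5]]
alpha = cronbach_alpha(scores)  # close to 1 because the items track each other
```

Note that when all items are identical, the formula yields exactly 1; range restriction (e.g., sampling only high scorers) shrinks both variances and typically drags this estimate down, which is the bias the paper addresses.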
Record ID: 75 [ Page 2 of 4, No. 36 ]
Authors: Michael Daniel C. Lucagbo; Lianne S. De La Cruz; Jecca V. Narvasa; Micah Jane A. Paglicawan
Abstract:
Efforts to bring down the incidence of crimes have been intensified by the Philippine National Police (PNP), with priority given to the index crimes, which include theft, robbery, carnapping, and motornapping. Interventions to bring down the incidence of these crimes have recently been enacted by the National Capital Region Police Office (NCRPO) of the PNP, including increases in the number of police personnel, mobile patrols, beat patrols, and checkpoints. In this study, the effect of each of these interventions is examined in a panel data analysis using weekly data gathered from all of the police stations in NCR. This paper performs a district-level analysis of the crimes and interventions. The negative binomial regression model for panel data is used to quantify the effects of the interventions on the incidence of index crimes. Results show that some (but not all) of these interventions are effective in reducing crime, and resources should thus be redirected towards the effective strategies. The results also show differences in the effects of the interventions across the different districts. The differences in the effects of the interventions among the different crimes are also studied.
Keywords: index crimes, intervention, panel data, negative binomial regression
Year: 2015 Vol.: 64 No.: 2
Record ID: 74 [ Page 2 of 4, No. 37 ]
Authors: Catherine Estiaga
Abstract:
Penalty analysis is a popular method used to evaluate data from sensory evaluation using the Just About Right Scale and the Hedonic Scale. Although the test estimates the mean drops for the “Too Little” and “Too Much” categories of product attributes, penalty analysis does not provide information that can be used to test the effect of each attribute on the overall liking score. The bootstrap resampling method, when used together with penalty analysis, estimates the standard error of the mean drops and allows testing for their significance. This method is applied to product testing of pizza products.
Keywords: bootstrap method, penalty analysis, just about right scale, hedonic scale, mean drops
Year: 2015 Vol.: 64 No.: 2
Record ID: 73 [ Page 2 of 4, No. 38 ]
Authors: John Closter F. Olivo
Abstract:
Several alternative statistical procedures have been suggested and published for statistically analyzing the incidence of micronucleated polychromatic erythrocytes (MNPCs) among treatment groups, but no standard procedure has been singled out and exclusively recommended. In this study, the potential of TO2 to induce chromosomal damage is tested using both Poisson and quasi-Poisson models for the statistical evaluation of the in vivo micronucleus (MN) assay. The genotoxic activity of TO2 is assessed in the rodent bone marrow micronucleus test using male mice. Results show that MN frequencies are significantly elevated in mice exposed to any dose level of TO2 administered orally as a single dose. Moreover, the results indicate that TO2 is a positive compound under the conditions of the tests used.
Keywords: in vivo micronucleus; MNPC; Poisson model; quasi-Poisson model; TO2
Year: 2015 Vol.: 64 No.: 2
Record ID: 72 [ Page 2 of 4, No. 39 ]
Authors: Stephen Jun V. Villejo
Abstract:
Recent unpredictable and extreme weather episodes and infestation are some realistic occurrences of structural change in the agricultural system; they produce outliers and extreme values in the data and consequently pose problems when building statistical models. An estimation procedure that is robust to structural change is therefore necessary. Three spatial-temporal models with varying dynamic characteristics of the parameters are postulated, each with a different estimation procedure, for agricultural yield in irrigated areas of the Philippines. One is a robust estimation procedure using the forward search algorithm with bootstrap in a backfitting algorithm. The other two also use the backfitting algorithm but infused with the Cochrane-Orcutt procedure. The robust estimation procedure and the one that allows parameters to vary across space gave competitive predictive abilities and are better than the ordinary linear model. Simulation studies show the superiority of the robust estimation procedure over the Cochrane-Orcutt procedure and the ordinary linear model in the presence of structural change.
Keywords: Spatial-temporal model; Backfitting algorithm; robust estimation; additive model
Year: 2015 Vol.: 64 No.: 2
Record ID: 71 [ Page 2 of 4, No. 40 ]
Authors: Robert Neil F. Leong; Frumencio F. Co; Daniel Stanley Y. Tan
Abstract:
One of the main areas of public health surveillance is infectious disease surveillance. With infectious disease backgrounds usually being more complex, appropriate surveillance schemes must be in place. One such procedure is the use of control charts. However, with most background processes following a zero-inflated Poisson (ZIP) distribution, owing to the extra variability from excess zeros, the control charting procedures must be properly developed to address this issue. Hence, in this paper, drawing inspiration from the development of combined control charting procedures for simultaneously monitoring each ZIP parameter individually in the context of statistical process control (SPC), several combined exponentially weighted moving average (EWMA) control charting procedures were proposed (Bernoulli-ZIP and CRL-ZTP EWMA charts). An extensive simulation study involving multiple parameter settings and outbreak model considerations (i.e., different shapes, magnitudes, and durations) yielded some key results, including the applicability of combined control charting procedures for disease surveillance with a ZIP background using EWMA techniques. For demonstration purposes, an application to actual data, using confirmed measles cases in the National Capital Region (NCR) from Jan. 1, 2010 to Jan. 14, 2015, revealed the comparability of the Bernoulli-ZIP EWMA scheme to the historical limits method currently in use.
Keywords: EWMA control charts, disease surveillance, ZIP distribution, measles
Year: 2015 Vol.: 64 No.: 2
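The combined Bernoulli-ZIP and CRL-ZTP charts above build on the generic EWMA recursion z_t = lam*x_t + (1 - lam)*z_{t-1}. The following sketch shows only that generic smoothing step on hypothetical weekly case counts; it is not the paper's combined charting procedure:

```python
def ewma(series, lam=0.2, target=None):
    """Generic EWMA recursion z_t = lam*x_t + (1 - lam)*z_{t-1},
    started at a target value (or the series mean if none is given).
    Small lam smooths heavily; an alarm would be raised when z_t
    crosses a control limit (limits omitted in this sketch)."""
    z = target if target is not None else sum(series) / len(series)
    out = []
    for x in series:
        z = lam * x + (1 - lam) * z
        out.append(z)
    return out

counts = [0, 0, 1, 0, 2, 0, 0, 3, 5, 8]  # hypothetical weekly case counts
smoothed = ewma(counts, lam=0.3)
```

Because each z_t pools information from all past observations, the EWMA reacts to small sustained shifts faster than a chart that looks at each count in isolation, which is why it is a common basis for outbreak detection schemes.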
Record ID: 70 [ Page 2 of 4, No. 41 ]
Authors: Abubakar S. Asaad; Erniel B. Barrios
Abstract:
The assumptions of constant characteristics across spatial locations and constant characteristics across time points facilitate estimation in a multivariate spatial-temporal model. A test based on the nonparametric bootstrap is proposed to verify these assumptions. The simulation studies confirm that the proposed test procedures are powerful and correctly sized.
Keywords: coverage probability, robustness, spatial-temporal model
Year: 2015 Vol.: 64 No.: 2
Record ID: 69 [ Page 2 of 4, No. 42 ]
Authors: Nabendu Pal; Suntaree Unhapipat
Abstract:
The availability of fast and affordable computing resources has empowered statisticians tremendously. It has also given applied researchers a unique edge to extend the frontier of their knowledge base by taking advantage of sophisticated computational statistical tools, where theoretical derivations of complex sampling distributions are often not required or can be bypassed. The bootstrap method is one such tool that is widely used in solving real-life problems involving statistical inference. This article presents the bootstrap in simple terms for applied researchers, with useful examples, and shows how it can go a long way in settling contentious issues with reasonably convincing results.
Keywords: Sampling distribution, p-value, nonparametric bootstrap, parametric bootstrap, test statistic.
Year: 2015 Vol.: 64 No.: 1
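In the spirit of the tutorial article above, a minimal nonparametric bootstrap sketch (not taken from the article; the data and function names are hypothetical) that forms a percentile confidence interval for a mean by resampling the data with replacement:

```python
import random
import statistics

def bootstrap_ci(data, stat=statistics.mean, n_boot=2000, alpha=0.05, seed=42):
    """Nonparametric bootstrap percentile CI: resample the data with
    replacement, recompute the statistic each time, and take the
    empirical alpha/2 and 1 - alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([rng.choice(data) for _ in range(n)])
                  for _ in range(n_boot))
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

sample = [4.1, 5.3, 3.8, 6.0, 5.5, 4.7, 5.1, 4.9, 6.2, 3.9]
lo, hi = bootstrap_ci(sample)  # interval brackets the sample mean
```

No sampling distribution is derived anywhere; the spread of the recomputed statistic across resamples stands in for it, which is the point the article makes for applied researchers.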
Record ID: 68 [ Page 2 of 4, No. 43 ]
Authors: James Roldan S. Reyes; Zita VJ. Albacea
Abstract:
This paper presents an alternative to the online or electronic approach currently used by some higher education institutions (HEIs) in administering student ratings of teachers. The developed method still employs the traditional paper approach but improves it through a sampling application that covers the sampling design, sample size, estimation technique, and strategic implementation. Three basic sampling designs, namely simple random, stratified random, and cluster sampling, were applied at three different sampling rates: 25%, 50%, and 75%. For the empirical evaluation of the developed method, the Student Evaluation of Teachers (SET) of the University of the Philippines Los Baños (UPLB) was utilized using the bootstrap resampling technique. Based on the findings, stratified random sampling is the most appropriate sampling design to use, with 50% of the students in each class section serving as SET evaluators. Results also revealed that bootstrap estimates of the standard error are lower than those obtained using the jackknife resampling procedure. Generally, the improved traditional paper approach, like the electronic approach, could reduce the cost of administering student ratings. However, the electronic approach suffers from high non-response bias leading to invalid results. Thus, to minimize the non-response error of the developed method, a standard protocol for administering the student ratings has been formulated.
Keywords: student ratings, traditional paper approach, sampling application, bootstrap resampling, jackknife resampling, non-response error
Year: 2015 Vol.: 64 No.: 1
Record ID: 67 [ Page 2 of 4, No. 44 ]
Authors: John D. Eustaquio; Dennis S. Mapa; Miguel C. Mindanao; Nino I. Paz
Abstract:
Hedging strategies have become more and more complicated as the assets being traded have become more interrelated. Thus, the estimation of risks for optimal hedging involves not only the quantification of individual volatilities but also their pairwise correlations. A model that captures the dynamic relationships is therefore necessary to estimate and forecast correlations of returns through time. Engle's dynamic conditional correlation (DCC) model is compared with other models of correlation. The performance of the correlation models is evaluated in this paper using only the daily log returns of the closing prices from January 2000 to February 2010 of the Peso-Dollar exchange rate and the Philippine Stock Exchange index. Ultimately, Engle's DCC model is adopted because of its consistency with expectations. Though generally negative, the correlation between these two returns is not constant, as the results indicated. The forecast evaluation of the models was divided into in-sample and out-of-sample forecast performance, with short-term (i.e., 22-day, 60-day, and 125-day) and medium-term (250-day and 500-day) rolling window correlations, or realized correlations, as proxies for the actual correlation. Based on the root mean squared error and mean absolute error, the integrated DCC model showed optimal forecast performance for the in-sample correlation patterns, while the mean-reverting DCC model had the most desirable forecast properties for dynamic long-run forecasts. Also, the Diebold-Mariano tests showed that the integrated DCC has greater predictive accuracy in terms of the 3-month realized correlations than the rest of the models.
Keywords: dynamic conditional correlation, Peso-Dollar exchange rate, PSE index, hedging
Year: 2015 Vol.: 64 No.: 1
Record ID: 66 [ Page 2 of 4, No. 45 ]
Authors: Stephen Jun V. Villejo
Abstract:
This paper investigates suicidal tendencies of the youth in the Philippines based on the Young Adult Fertility and Sexuality Study (YAFS) 2002. The main goal of the paper is the classification and prediction of suicidal tendencies using classification algorithms. The different classification algorithms, such as classification and regression trees, random forests, and conditional inference trees, and the logistic regression have consistent findings on the significant variables affecting suicidal tendencies. Due to the severely unbalanced classes of the response variable, the classification models have very poor predictive ability for the minority class although the overall classification rate is high. A classification algorithm is proposed which improves the predictive ability by balancing out the correct classification between the two classes of the response variable.
Keywords: classification, suicide, prediction, logistic regression
Year: 2015 Vol.: 64 No.: 1
Record ID: 65 [ Page 2 of 4, No. 46 ]
Authors: Karl Anton M. Retumban
Abstract:
Factors influencing credit card ownership were identified using data from the Global Financial Inclusion Index Database of The World Bank and the tree-based methods CART, boosting, and bagging. The prediction accuracy of the methods was compared in terms of the training and test error rates, and results on the world and Philippine data were compared. For both the world and Philippine data, the factors influencing consumers to own a credit card include financial account ownership, highest educational attainment, and age. For the world data, the factors that influence credit card ownership are financial account ownership, debit card ownership, withdrawal frequency in a personal account, highest educational attainment, a current loan for a home or apartment purchase, age, getting cash at an ATM, and depositing cash at an ATM. For the Philippine data, the influential factors for Filipino consumers are financial account ownership, age, income quintile, highest educational attainment, and depositing cash over the counter at a branch of a bank or financial institution. Among the procedures, boosting has the smallest test error rate while bagging has the largest training and test error rates, for both the world and Philippine data. CART and boosting have the smallest training error rates for the world data and the Philippine data, respectively.
Keywords: classification and regression trees, boosting, bagging, credit card ownership, Global Financial Inclusion Index Database
Year: 2015 Vol.: 64 No.: 1
Record ID: 64 [ Page 2 of 4, No. 47 ]
Authors: Michael Daniel C. Lucagbo
Abstract:
Socioeconomic classification (SEC) is an important construct to enable one to capture and understand changes in the structure of a society. The 1SEC 2012, a new scheme for identifying the SEC of Philippine households, predicts SEC using information on household characteristics through the ordinal logistic regression model. This study aims to improve the predictive ability of the 1SEC methodology by using state-of-the-art statistical learning techniques: discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN), and thereby suggest a new scheme for predicting SEC. The results show that SVM and ANN exhibit improvements in exact-cluster prediction performance, suggesting alternative methods for predicting SEC.
Keywords: socioeconomic classification, ordinal logistic regression, discriminant analysis, support vector machines, artificial neural networks
Year: 2015 Vol.: 64 No.: 1
Record ID: 63 [ Page 2 of 4, No. 48 ]
Authors: Stephen Jun Villejo; Mark Tristan Enriquez; Michael Joseph Melendres; Dexter Eric Tan; Peter Julian Cayton
Abstract:
The government has instituted projects aimed at helping the poor and has implemented mechanisms to make the services accessible to them. The wisdom of these projects should not be defeated by misidentification of the households deserving to benefit from them, which can be remedied through proper and thorough assessment of their economic status. The study aims to provide a methodology and model for classifying households using demographic and household asset variables that may be used in identifying recipients of poverty-targeted projects. Cluster analysis was employed to identify household classifications using income data from the Family Income and Expenditure Survey 2009, and five income clusters were identified. To study the relationship between the income classes and several predictors of income identified in previous research, a family of logistic regression models was utilized, culminating in the generalized logistic regression model. Nine significant predictors were included in the final reduced model, which is assessed to have good fit via multiple Hosmer and Lemeshow tests. These variables were the following: location of the household, whether in NCR or not, and whether in an urban or rural area; education and employment status of the household head; number of cars, air-conditioners, and television sets; and the building type and household type. The sensitivity table suggests that the model is biased towards predicting the lower income classes. The research has identified a viable methodology for the classification of households into income classes.
Keywords: multinomial logistic model, income determinants, clustering methods
Year: 2014 Vol.: 63 No.: 2
Record ID: 62 [ Page 2 of 4, No. 49 ]
Authors: Lisa Grace S. Bersales; Michael Daniel C. Lucagbo
Abstract:
In the Philippines, the National Wages and Productivity Commission (NWPC) formulates policies and guidelines that Tripartite Wage and Productivity Boards use in determining minimum wages in their respective regions. Reviews of the implementation of the minimum wage determination have been done in past studies to determine which of the factors listed by NWPC for consideration by the wage boards are actually used to determine minimum wage. Results indicated that the significant determinant of minimum wage is consumer price index. Two stage least squares estimation of a Fixed Effects Model for Panel Data for the period 1990-2012 showed that significant determinants of regional minimum wage for non-agriculture are: Consumer Price Index, Gross Regional Domestic Product, and April employment rate. The lower and upper estimates from the estimated equation of the Fixed Effects Model for Panel Data may provide intervals that the wage boards can use in making the final determination of minimum wage. The following shocks which would likely introduce abnormal wage setting behavior on the part of the wage boards were not significant: 1997-1998 - Asian Financial Crisis; 2002 - spillover effects from U.S. technology bubble burst; 2008-2009 - spillover effects from Global Financial Crisis.
Keywords: tripartite wage and productivity boards, minimum wage, fixed effects model for panel data, shocks, two stage least squares
Year: 2014 Vol.: 63 No.: 2
Record ID: 61 [ Page 2 of 4, No. 50 ]
Authors: Michael Daniel C. Lucagbo; Genica Peye C. Alcaraz; Kristina Norma B. Cobrador; Elaine Japitana; Gelli Anne Q. Sadsad
Abstract:
The growing population of the Philippines hinders the country from achieving economic development given the limited resources available. The 2010 Census of Population and Housing (CPH) reports that the Philippine population has reached 92.1 million, a 15.8-million increase from the 76.3 million reported in 2000. Moreover, the relationship between population and family size, on the one hand, and poverty incidence on the other has been established through econometric models showing the causality between the presence of young dependents in a household and household welfare. Using the Family Income and Expenditure Survey (FIES) 2009 data, this study examines the factors affecting the number of young dependents in a household, focusing in particular on the household’s level of contraceptive expenditure. The negative binomial regression model, which allows for overdispersion in the data, is used to quantify the effect of the factors and predict the average number of young dependents in a household. Results show that for every P10,000 increase in total expenditure on contraceptives over a period of six months, the mean number of young dependents decreases by 3.7%. Other demographic variables, such as the education of the household head and the income of the household, are controlled for in the study.
Keywords: Young dependents, contraceptive expenditure, negative binomial regression, overdispersion
Year: 2014 Vol.: 63 No.: 2
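The percentage effect quoted in the abstract above follows from the log link of the negative binomial model: a coefficient beta on an expenditure variable multiplies the mean count by exp(beta * delta_x) for a change delta_x. A back-of-the-envelope check, not from the paper; the coefficient here is hypothetical, back-solved from the reported 3.7% effect:

```python
import math

# Hypothetical coefficient per P1,000 of contraceptive expenditure,
# chosen so that a P10,000 increase reproduces the reported 3.7% drop
beta = math.log(1 - 0.037) / 10.0

multiplier = math.exp(beta * 10)     # multiplicative effect of a P10,000 increase
pct_change = (multiplier - 1) * 100  # percentage change in the mean count
```

The same multiplicative reading applies to any log-link count model (Poisson or negative binomial): coefficients are additive on the log scale and multiplicative on the scale of the mean count.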