

Record ID: 160    [ Page 1 of 2, No. 1 ]

Modelling Portfolio Risk and Diversification Effects of a Portfolio Using the Exponential Distribution – Bivariate Archimedean Gumbel Copula Model

Authors: Owen Jakata and Delson Chikobvu

Abstract:

This study uses the Archimedean Gumbel copula to construct the dependence structure and joint probability distribution of asset returns, with the Exponential distribution as the marginal distribution. The main objective is to estimate the diversification effects of investing in a portfolio of two financial assets, namely the South African Industrial and Financial Indices. The Exponential distribution is used as the marginal distribution of the returns, instead of the Normal distribution, to better characterise the financial returns of the two assets. The scatterplots indicate that the dependence in both gains and losses is well captured by the Archimedean Gumbel copula. Monte Carlo simulation of an equally weighted portfolio of the two financial assets is used to model and quantify the risk of the resultant portfolio. The results confirm that there are benefits to diversification, since the riskiness of the portfolio is less than the sum of the risks of the two financial assets. It is less risky to invest in diversified portfolios that include assets from the two different industries/stock markets. Given the dependence and contagion between global stock markets, the findings are useful to local and international investors seeking a portfolio that includes developing countries' stock market indices, such as the South African financial assets. This study provides investors with a framework to quantify diversification effects, which allows them to avoid extreme risks whilst benefiting from extreme gains.
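As an illustration of the mechanics the abstract describes (not the authors' code), the sketch below simulates a bivariate Gumbel copula with Exponential loss marginals and compares the portfolio's empirical Value-at-Risk and Expected Shortfall against the weighted stand-alone risks; the rates `lam1`, `lam2` and the copula parameter `theta` are arbitrary illustrative choices:

```python
import math
import random

def sample_gumbel_copula(theta, n, seed=1):
    """Draw n pairs from a bivariate Gumbel copula (theta >= 1) via the
    Marshall-Olkin frailty construction: the frailty is positive stable,
    sampled with the Chambers-Mallows-Stuck / Kanter formula."""
    rng = random.Random(seed)
    alpha = 1.0 / theta
    out = []
    for _ in range(n):
        u = rng.uniform(1e-9, math.pi - 1e-9)
        w = rng.expovariate(1.0)
        s = (math.sin(alpha * u) / math.sin(u) ** (1.0 / alpha)) \
            * (math.sin((1.0 - alpha) * u) / w) ** ((1.0 - alpha) / alpha)
        e1, e2 = rng.expovariate(1.0), rng.expovariate(1.0)
        out.append((math.exp(-(e1 / s) ** alpha), math.exp(-(e2 / s) ** alpha)))
    return out

def var_es(losses, level=0.95):
    """Empirical Value-at-Risk and Expected Shortfall of a loss sample."""
    xs = sorted(losses)
    k = int(level * len(xs))
    return xs[k], sum(xs[k:]) / len(xs[k:])

# Illustrative parameters: Exponential loss marginals with rates lam1, lam2
lam1, lam2, theta = 1.0, 1.5, 2.0
pairs = sample_gumbel_copula(theta, 20000)
l1 = [-math.log(1.0 - u) / lam1 for u, _ in pairs]   # inverse-CDF transform
l2 = [-math.log(1.0 - v) / lam2 for _, v in pairs]
port = [0.5 * a + 0.5 * b for a, b in zip(l1, l2)]   # equally weighted portfolio
var_p, es_p = var_es(port)
_, es1 = var_es(l1)
_, es2 = var_es(l2)
```

Because Expected Shortfall is subadditive, `es_p` should not exceed `0.5*es1 + 0.5*es2`; the gap between the two is exactly the kind of diversification benefit the study quantifies.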

Keywords: Expected Shortfall, Monte Carlo simulation, Value-at-Risk


Year: 2023       Vol.: 72       No.: 1      


Record ID: 159    [ Page 1 of 2, No. 2 ]

Local Quadratic Regression: Maximizing Performance via a Modified PRESS** for Bandwidths Selection

Authors: E. Edionwe and O. Eguasa

Abstract:

In the application of nonparametric regression models, it is well established that the bandwidth (also called the smoothing parameter) is the single most crucial parameter determining the quality of the estimated responses obtained from the regression procedure, and that its choice (how small or large it is) is hugely influenced by the criterion applied for its selection. In small-sample settings, which are typical of response surface studies, the penalized Prediction Error Sum of Squares (PRESS**) criterion is recommended for selecting this all-important parameter. To select bandwidths with improved statistical properties, we propose a modified version of the PRESS** criterion specifically for the Local Quadratic Regression (LQR) model. Results from simulated data, as well as from two popular problems in the literature, show that the LQR procedure using bandwidths selected via the proposed modified criterion performs substantially better than its counterpart using bandwidths selected via the PRESS** criterion.
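For intuition, here is a minimal sketch of bandwidth selection for local quadratic regression via the ordinary leave-one-out PRESS criterion; the paper's penalized PRESS** and its proposed modification add penalty terms not reproduced here, and the data, Gaussian kernel, and bandwidth grid are illustrative:

```python
import math

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def lqr_fit(x0, xs, ys, h, skip=None):
    """Kernel-weighted local quadratic fit at x0; optionally leave out
    observation `skip` (needed for the leave-one-out PRESS)."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for i, (x, y) in enumerate(zip(xs, ys)):
        if i == skip:
            continue
        d = x - x0
        w = math.exp(-0.5 * (d / h) ** 2)     # Gaussian kernel weight
        row = [1.0, d, d * d]                 # local quadratic basis
        for r in range(3):
            for c in range(3):
                A[r][c] += w * row[r] * row[c]
            b[r] += w * row[r] * y
    return solve3(A, b)[0]                    # intercept = fitted value at x0

def press(xs, ys, h):
    """Ordinary leave-one-out PRESS (not the penalized PRESS**)."""
    return sum((ys[i] - lqr_fit(xs[i], xs, ys, h, skip=i)) ** 2
               for i in range(len(xs)))

xs = [i / 10 for i in range(30)]
ys = [math.sin(x) + 0.05 * math.cos(7 * x) for x in xs]
best_h = min([0.1, 0.2, 0.4, 0.8, 1.6], key=lambda h: press(xs, ys, h))
```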

Keywords: Desirability Function, Hat matrix, Penalized Prediction Error Sum of Squares, Response Surface Methodology


Year: 2023       Vol.: 72       No.: 1      


Record ID: 158    [ Page 1 of 2, No. 3 ]

Spatiotemporal Patterns of COVID-19 Cases in Quezon City, Philippines

Authors: Tricia Janylle B. Sta. Maria, Nancy E. Añez-Tandang, and Edrun R. Gayosa

Abstract:

Various studies have been undertaken to explore the spatial characteristics of the COVID-19 pandemic. However, only a few have considered the pandemic's temporal characteristics to assess space-time dynamics. This study focuses on COVID-19 spatiotemporal patterns in Quezon City, Philippines from November 2020 to October 2021. Spatial clustering and spatiotemporal patterns were analyzed based on a space-time cube (STC). Results showed that hot spots and cold spots were found in the city's northern and southern parts, respectively. Also, a significant increasing pattern was revealed throughout the study period. Moreover, STC analysis demonstrated that intensifying hot spots (locations that were statistically significant hot spots for 90% of the study period, and where the intensity of clustering of high COVID-19 case counts increased significantly overall) were primarily concentrated in the center and northern regions of Quezon City, where the majority of the barangays in Districts 2, 5, and 6 are located. Barangays identified with this pattern were Bagong Silangan, Batasan Hills, Commonwealth, Holy Spirit, Payatas, Matandang Balara, Pasong Putik Proper, Fairview, Pasong Tamo, and Sauyo. As there is a possible resurgence in COVID-19 cases, identifying spatiotemporal trends and clustering patterns is vital for regulating and controlling COVID-19's spread. Thus, the study's findings and methods can be utilized to predict and manage epidemics and help decision-makers control existing and future outbreaks.

Keywords: spatial clustering, spatiotemporal analysis, space-time cube, coronavirus


Year: 2023       Vol.: 72       No.: 1      


Record ID: 157    [ Page 1 of 2, No. 4 ]

Utilization of Machine Learning, Government-Based and Non-Conventional Indicators for Property Value Prediction in the Philippines

Authors: Gabriel Isaac L. Ramolete, Bryan Bramaskara, Dustin A. Reyes, and Adrienne Heinrich

Abstract:

Property appraisal and value estimation in the Philippines are prone to human error and bias, due to price subjectivity and the general difficulty in properly quantifying the impact of factors beyond the property itself. Predictive models for property valuation typically involve conventional features of the house (e.g., number of bathrooms) and market prices of nearby properties. This paper investigates the value of incorporating alternative data to account for deviations in true market value and improve property value predictions in the Philippines and other developing countries. The study considers public data and socio-economic anchor indicators, assessing their relevance to property value prediction in the Philippines. By utilizing the Department of Trade and Industry's 2021 National Competitiveness Index Rating, this research also investigates the significance of a Local Government Unit's competitiveness based on its economic dynamism, government efficiency, infrastructure, and resiliency. Different commonly used Machine Learning (ML) methods and features from various data sources are compared, and it is found that the inclusion of government indicators has a substantial positive effect on model performance on top of conventional indicators that can be globally replicated. A Mean Absolute Percentage Error (MAPE) of 10.7-21% is obtained, which is competitive with the performance ranges of other reported models. A property-segment (personalized) approach is proposed to achieve lower error rates in Philippine appraisal (in 87.5% of cases), better access and transparency for populations outside the real estate network, and minimally biased assessments, all of which are also relevant for other developing countries.
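As a reminder of the headline error metric, MAPE expresses the average prediction error as a percentage of the actual value; the tiny example below uses fabricated appraisal figures, not the study's data:

```python
def mape(actual, predicted):
    """Mean Absolute Percentage Error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Hypothetical appraised values (PHP) vs. model predictions
appraised = [2.50e6, 4.10e6, 3.30e6, 5.75e6]
modelled  = [2.70e6, 3.80e6, 3.50e6, 5.20e6]
err = mape(appraised, modelled)
```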

Keywords: property appraisal, spatial analysis, city competitiveness, clustering


Year: 2023       Vol.: 72       No.: 1      


Record ID: 156    [ Page 1 of 2, No. 5 ]

Verification of Coffee Product Form and Determination of Conversion Rate From Coffee Dried Berries to Green Coffee Beans (GCB)

Authors: Dennis S. Mapa, Ph.D., Divina Gracia L. Del Prado, Ph.D., Vivian R. Ilarina, Rachel C. Lacsa, Manuela S. Nalugon, Abella A. Regala, Marivic C. De Luna, and Ray Francis B. De Castro

Abstract:

The Philippine Statistics Authority (PSA) collects, generates, and releases production data on coffee in the form of dried berries from the results of the Crops Production Survey. In the computation of the Supply Utilization Accounts and Food Balance Sheet for coffee, the coffee form used is green coffee beans (GCB), using a conversion rate of 28 percent from dried berries to GCB. However, the Food and Agriculture Organization of the United Nations (FAO) and International Coffee Organization (ICO) use a 50 percent conversion rate from dried berries to GCB. The common form of coffee traded by farmers and the conversion rate from dried berries to GCB were investigated through consultations with traders, processors, and other stakeholders; and surveys with coffee farmers and traders as respondents. The results of this study show that the common form of coffee traded by farmers is GCB, and the average conversion rate from dried berries to GCB is 50 percent.

Keywords: fresh berries, field visits, survey, processors, experiment, percent recovery


Year: 2022       Vol.: 71       No.: 2      


Record ID: 155    [ Page 1 of 2, No. 6 ]

Determination of Dry Rubber Content of Rubber Cup Lump

Authors: Claire Nova O. Abdulatip and Honey Fe G. Boje

Abstract:

The quality of latex from rubber trees is determined by the amount of Dry Rubber Content (DRC). The price of cup lumps depends directly on the DRC, which is commonly determined through visual observation by rubber dealers; thus, no standard method is used by rubber buyers in the industry for the farm-gate determination of cup lump DRC. The Philippines has used a 25% conversion rate from cup lumps to dry rubber, whereas other member countries use 50%. Therefore, this paper addresses the conversion rate of the dry rubber content of rubber cup lumps in the Philippines. An on-site validation was conducted by collecting data from selected rubber processing plants in major rubber-producing provinces, namely Zamboanga Sibugay, North Cotabato, and Bukidnon. Secondary data, such as cup lump and crumb rubber volumes from 2017 to 2019, were collected and analyzed using percentage ratio comparison. Results indicated that the conversion rate of cup lumps to dry rubber was more than 50%.

Keywords: Rubber, Conversion Rate, On-Site Validation


Year: 2022       Vol.: 71       No.: 2      


Record ID: 154    [ Page 1 of 2, No. 7 ]

Topic Identification and Classification of Google Play Store Reviews

Authors: Daniel David M. Pamplona

Abstract:

Digital distribution platforms, such as the Google Play Store, contain an enormous quantity of information related to app data and user reviews. A particularly challenging task is to classify a large unstructured dataset into smaller clusters or topics. To this end, data from 19,886 user reviews were extracted from the Google Play Store. The main task is to determine app characteristics, through common themes, that are commonly mentioned in positive and negative reviews. The text data were preprocessed, and common topics were identified using Latent Dirichlet Allocation (LDA) for positive and negative reviews separately. The accuracy of the topics was assessed using a perplexity-based approach and human interpretation. To further validate the topic model, the topic assignment was used as the outcome variable in a Naive Bayes model with the reviews as input. Empirical results show that the extracted topics can be predicted well from the text reviews. Finally, the distribution of topics was calculated across different app categories.
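A minimal sketch of the validation step described, training a multinomial Naive Bayes classifier on labeled review text (the tiny review set is fabricated for illustration, and the LDA step that produces the labels is not reproduced):

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial Naive Bayes with Laplace smoothing.
    docs: list of (token_list, label)."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    return class_counts, word_counts, vocab, len(docs)

def predict_nb(model, tokens):
    """Pick the class with the highest posterior log-probability."""
    class_counts, word_counts, vocab, n = model
    best, best_lp = None, -math.inf
    for label, cc in class_counts.items():
        total = sum(word_counts[label].values())
        lp = math.log(cc / n)   # log prior
        for t in tokens:
            lp += math.log((word_counts[label][t] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

reviews = [
    ("love this app great design".split(), "positive"),
    ("crashes all the time terrible".split(), "negative"),
    ("great features love it".split(), "positive"),
    ("terrible ads crashes constantly".split(), "negative"),
]
model = train_nb(reviews)
label = predict_nb(model, "love the great design".split())
```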

Keywords: Topic Modeling, Latent Dirichlet Allocation, Naive Bayes Classifier, Perplexity


Year: 2022       Vol.: 71       No.: 2      


Record ID: 153    [ Page 1 of 2, No. 8 ]

A Bayesian Hierarchical Model for COVID-19 Cases in Mindanao, Philippines

Authors: Jejemae D. Nacion and Bernadette F. Tubo

Abstract:

A Bayesian hierarchical modelling approach is utilized to nowcast COVID-19 cases in Mindanao, Philippines for the years 2020 to 2021. A spatio-temporal model is considered, and the proposed methodology explores a flexible way of correcting the time- and space-delayed reports of COVID-19 cases over a duration of four weeks for the 27 provinces in Mindanao via a Bayesian approach. The goal of the modelling approach is to include parameters that correct reporting delays in the dataset and to derive a model using the Integrated Nested Laplace Approximation (INLA). The study shows that the proposed model was able to capture the increasing trend of COVID-19 case counts; that is, the predicted counts are closer to the true counts than the currently reported counts of COVID-19 cases, which showed a decreasing behavior. The ability of the proposed model to nowcast statistically significant estimates, particularly for epidemic counts of COVID-19 in the presence of reporting delays, may aid health authorities in implementing effective control measures and issuing warnings to the public.

Keywords: Bayesian inference, spatio-temporal model, reporting delay, nowcasting


Year: 2022       Vol.: 71       No.: 2      


Record ID: 152    [ Page 1 of 2, No. 9 ]

Hierarchical Bayesian Model for Correcting Reporting Delays in Dengue Counts

Authors: Mikee T. Demecillo and Bernadette F. Tubo

Abstract:

Real-time surveillance and precise case estimation are necessary for situational awareness, in order to spot trends and outbreaks and establish efficient control actions. Understanding the mechanisms of a sudden rise or fall in disease cases over time is hampered by the reporting delays between disease onset and case reporting. This study uses a flexible temporal nowcasting model, with Bayesian inference for latent Gaussian models built in R-INLA, to rectify reporting delays in weekly dengue surveillance data in Northern Mindanao from 2009 to 2010. Additionally, it seeks to quantify all the uncertainties involved in replacing the missing values. The statistical issue is to forecast the run-off triangle of counts from the observed counts. In contrast to the currently reported cases, which seem to be declining, the posterior predictive model on the given temporal dataset recognizes that there are more dengue cases than previously reported (supporting the actual scenario). This implies that even with delayed data, the model was still able to provide a reliable estimate of the true number of cases. This paper offers a nowcasting model to aid in dengue control and sound judgment on the part of the authorities concerned.

Keywords: Latent Gaussian Model, Nowcast, Count Data


Year: 2022       Vol.: 71       No.: 2      


Record ID: 151    [ Page 1 of 2, No. 10 ]

Estimating the Magnitude of the Poor Households in Metro Manila Using the Poisson Regression Model

Authors: Bernadette B. Balamban, Anna Jean C. Pascasio, Driesch Lucien R. Cortel and Maxine R. Ridulme

Abstract:

In the Official Poverty Statistics, Metro Manila, also known as the National Capital Region (NCR), is one of the areas that belong to the least poor cluster, a cluster that has relatively low poverty incidences. The Philippine Statistics Authority released the 2018 Municipal and City Level Poverty Estimates using the Elbers, Lanjouw, and Lanjouw (ELL) methodology. The 2018 Small Area Estimates (SAE) of Poverty also released estimates for the 14 sub-areas in Metro Manila, ranging from 1.5 percent to 6.5 percent. The city-level poverty estimates were released as official statistics using a direct estimation technique. Given the relatively low poverty incidences of the region, this paper aims to estimate the 2018 poverty incidence for the legislative districts in NCR, including the 14 sub-areas of the City of Manila, using the Poisson regression methodology, and to compare the results with those of the ELL methodology. Data sources include the 2015 Census of Population and the merged 2018 Family Income and Expenditure Survey and January 2019 round of the Labor Force Survey. A total of five significant indicators were included in the final model. Results show that the Poisson model produced more reliable estimates for NCR than the ELL methodology. These SAE techniques allow for generating more granular poverty statistics useful for targeting poor beneficiaries. Furthermore, the information may provide an opportunity for the LGUs to act swiftly and provide appropriate subsidies for areas within Metro Manila.
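A toy sketch of fitting a Poisson regression by Newton-Raphson (Fisher scoring) on simulated data; this is not the ELL or SAE pipeline, and the single covariate and true coefficients are illustrative:

```python
import math
import random

def poisson_regression(x, y, iters=25):
    """Newton-Raphson (Fisher scoring) fit of log E[y] = b0 + b1*x."""
    b0, b1 = math.log(sum(y) / len(y) + 1e-9), 0.0   # start at the null model
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu                # score vector
            g1 += (yi - mu) * xi
            h00 += mu                    # Fisher information matrix
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # 2x2 Newton step
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

def rpois(rng, mu):
    """Knuth's Poisson sampler."""
    limit, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= rng.random()
        if p < limit:
            return k
        k += 1

rng = random.Random(7)
x = [rng.uniform(0.0, 2.0) for _ in range(400)]
y = [rpois(rng, math.exp(0.5 + 0.8 * xi)) for xi in x]   # true b0=0.5, b1=0.8
b0_hat, b1_hat = poisson_regression(x, y)
```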

Keywords: poverty statistics, small area estimation, survey and census data, time invariance


Year: 2022       Vol.: 71       No.: 2      


Record ID: 150    [ Page 1 of 2, No. 11 ]

Application of Consecutive Sampling Technique in a Clinical Survey for an Ordered Population: Does it Generate Accurate Statistics?

Authors: Mohamad Adam Bujang, Tg Mohd Ikhwan Tg Abu Bakar Sidik, and Nadiah Sa'at

Abstract:

This study aims to compare the statistical generalizations inferred from samples obtained by systematic sampling and by consecutive sampling, and then compare both results with the true population parameters of the target population. The study was conducted using two approaches. The first was a comparison between sample statistics and population parameters based on a simulation analysis estimating the population parameters of three statistical distributions (i.e., Normal, Exponential, and Poisson) using seven sub-samples and 1000 iterations. The second was a comparison between sample statistics and population parameters based on real-life data sets comprising six sub-samples and four parameters. Based on the results of the simulation analysis, systematic sampling offers a greater advantage, having a smaller mean square error (MSE) in 40 out of 70 comparisons (57.1%), while consecutive sampling has a smaller MSE in 29 out of 70 comparisons (41.4%). Only one MSE comparison was identical between systematic sampling and consecutive sampling. Based on a validation approach, systematic sampling produced more accurate statistics than consecutive sampling in six out of eight comparisons. In summary, systematic sampling offers a better advantage in terms of accuracy. However, consecutive sampling is still able to generate valid and accurate statistics even though it is a type of non-probability sampling, especially if a sufficiently large sample size is obtained for statistical analysis. Therefore, it is recommended that in situations where it is difficult to apply a systematic sampling technique in a particular clinical setting, researchers may opt to apply the consecutive technique in the recruitment process as an alternative, with a limitation on making generalizations about the target population.
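The simulation comparison can be sketched as follows: draw repeated systematic and consecutive samples from an ordered synthetic population and compare the MSE of the sample mean (the population size, sample size, and upward drift are illustrative, not the paper's settings):

```python
import random
import statistics

def systematic_sample(pop, n):
    """Every k-th unit with a random start (probability sampling)."""
    k = len(pop) // n
    start = random.randrange(k)
    return [pop[start + i * k] for i in range(n)]

def consecutive_sample(pop, n):
    """n consecutive units from a random entry point (non-probability)."""
    start = random.randrange(len(pop) - n + 1)
    return pop[start:start + n]

random.seed(42)
# An "ordered" clinic population: values drift upward along the queue
pop = [50 + 0.02 * i + random.gauss(0, 5) for i in range(1000)]
true_mean = statistics.fmean(pop)

def mse(sampler, n=50, reps=1000):
    return statistics.fmean(
        (statistics.fmean(sampler(pop, n)) - true_mean) ** 2
        for _ in range(reps))

mse_sys = mse(systematic_sample)
mse_con = mse(consecutive_sample)
```

Because a consecutive sample covers only one stretch of the ordered queue, its mean inherits the local trend, so its MSE is much larger here; systematic sampling spreads the sample across the whole order.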

Keywords: population parameters; sample statistics; systematic sampling.


Year: 2022       Vol.: 71       No.: 1      


Record ID: 149    [ Page 1 of 2, No. 12 ]

On Some Efficient Classes of Estimators Based on Higher Order Moments of an Auxiliary Attribute

Authors: Shashi Bhushan and Anoop Kumar

Abstract:

This paper discusses the problem of estimating the population mean utilizing information on the mean and variance of a qualitative characteristic. We introduce some efficient classes of estimators based on higher-order moments, such as the variance of an auxiliary attribute. The conventional mean estimator, the Bhushan and Gupta (2016) estimator, and the traditional regression and ratio estimators proposed by Naik and Gupta (1996) are shown to be sub-classes of the proposed estimators for properly chosen values of the scalars described. The performance of the suggested estimators has been assessed empirically and theoretically against contemporary estimators.
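For concreteness, a sketch of a Naik-Gupta-type ratio estimator of the mean using an auxiliary attribute, on simulated data; the population, the 40% attribute prevalence, and the positive attribute-response association are all illustrative:

```python
import random
import statistics

def ratio_estimator(y_sample, a_sample, P):
    """Ratio estimator of the population mean using an auxiliary attribute:
    ybar * (P / p), where P is the known population proportion possessing
    the attribute and p the sample proportion (Naik-Gupta-type form)."""
    ybar = statistics.fmean(y_sample)
    p = statistics.fmean(a_sample)
    return ybar * P / p

random.seed(3)
N, n = 5000, 100
attr = [1 if random.random() < 0.4 else 0 for _ in range(N)]
# Study variable positively associated with the attribute
y = [20 + 15 * a + random.gauss(0, 4) for a in attr]
P = sum(attr) / N
Ybar = statistics.fmean(y)

idx = random.sample(range(N), n)
est = ratio_estimator([y[i] for i in idx], [attr[i] for i in idx], P)
```

The estimator exploits the positive correlation between y and the attribute: when the sample over-represents the attribute (p > P), the sample mean is deflated accordingly, and vice versa.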

Keywords: mean square error, efficiency, qualitative characteristics


Year: 2022       Vol.: 71       No.: 1      


Record ID: 148    [ Page 1 of 2, No. 13 ]

An Application of CATANOVA and Logistic Regression on the Most Prevalent Sexually Transmitted Infection (A Case Study of the University of Nigeria Teaching Hospital)

Authors: Nnaemeka Martin Eze, Oluchukwu Chukwuemeka Asogwa, Samson Offorma Ugwu, Chinonso Michael Eze, Felix Obi Ohanuba, Tobias Ejiofor Ugah

Abstract:

This research focused on the application of CATANOVA and logistic regression to the most prevalent Sexually Transmitted Infection (STI) reported in the University of Nigeria Teaching Hospital from 2010 to 2020. A population of 20,704 patients was recorded to have contracted eight (8) selected STIs. A prevalence analysis was computed to determine the most prevalent STI. A two-way CATANOVA cross-classification was computed to ascertain the age group and gender that suffer most from the most prevalent STI. A three-way CATANOVA was computed to ascertain the association among drug prescription, age, and gender of the Gonorrhea patients. A logistic regression model was fitted to predict infertility as an effect of the most prevalent STI. The prevalence analysis showed Gonorrhea infection to be the most prevalent STI at 33.08%. A population of 6,850 patients recorded to have contracted Gonorrhea infection from 2010 to 2020 was employed for the analysis. The two-way CATANOVA cross-classification showed that gender, age, and interaction effects were statistically significant at the 5% significance level. Males (3,752; 54.8%) suffer from Gonorrhea infection more than females (3,098; 45.2%), and those aged 30-39 years (1,946; 28.4%) suffer from it more than any other age interval. The interaction effect shows that the rate of contracting Gonorrhea infection by gender differs from one age interval to another. The three-way CATANOVA results showed that the drugs prescribed for the treatment of Gonorrhea infection depend on gender and age. The logistic regression results showed that increases in age, body mass index, blood pressure, blood sugar, bacteria quantity, and Gonorrhea history were associated with an increased likelihood of a Gonorrhea patient being infertile.

Keywords: Chi-square test, Prediction, Prevalence


Year: 2022       Vol.: 71       No.: 1      


Record ID: 147    [ Page 1 of 2, No. 14 ]

Analytic Hierarchy Process with Rasch Measurement in the Construction of a Composite Metric of Student Online Learning Readiness Scale

Authors: Joyce DL. Grajo, James Roldan S. Reyes, Liza N. Comia, Lara Paul A. Ebal, Jared Jorim O. Mendoza, and Mara Sherlin DP. Talento

Abstract:

This paper developed the Online Learning Readiness Composite Scale (OLRCS), a composite measure of student online learning readiness based on five dimensions, namely (1) computer/internet self-efficacy; (2) self-directed learning; (3) learner control; (4) motivation for learning; and (5) online communication self-efficacy. A single metric of online learning readiness has an advantage over its disaggregated dimensions. For one, it allows a summative description of each student, which school administrators can use for effective student targeting toward flexible learning. Rasch Analysis (RA) was performed to produce an objective measure for each dimension, while the Analytic Hierarchy Process (AHP) was applied to aggregate the computed Rasch scores of the five dimensions. Three OLRCS were constructed using weights generated by (1) teacher participants, (2) student participants, and (3) combined student and teacher participants. Results showed that motivation for learning consistently received the highest weight, while online communication self-efficacy and computer/internet self-efficacy received low weights in all three OLRCS. Research findings also showed that student participants gave more importance to learner control than self-directed learning, unlike the teacher participants. The difference between the teacher and student perspectives merits detailed attention to optimize the online learning environment and enable individual support. Nevertheless, using cluster analysis, the distribution of students who are ready, undecided, or not ready for online learning is similar across the three constructed OLRCS.
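The AHP aggregation step can be sketched as extracting priority weights from a pairwise comparison matrix via power iteration on the principal eigenvector; the 3x3 judgement matrix below is hypothetical, not the study's elicited values:

```python
def ahp_weights(M, iters=100):
    """Priority weights from a reciprocal pairwise comparison matrix,
    via power iteration toward Saaty's principal eigenvector."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]   # renormalize so weights sum to 1
    return w

# Hypothetical judgements for three readiness dimensions: motivation is
# 3x as important as self-direction and 5x as important as learner control
M = [[1.0,     3.0, 5.0],
     [1 / 3.0, 1.0, 2.0],
     [1 / 5.0, 1 / 2.0, 1.0]]
w = ahp_weights(M)
```

The resulting weights would then multiply the per-dimension Rasch scores to form the composite scale.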

Keywords: multidimensional latent variable; multi-criteria decision analysis; linear aggregation


Year: 2022       Vol.: 71       No.: 1      


Record ID: 146    [ Page 1 of 2, No. 15 ]

Implementing an Effective Survey Operations for a Research and Development Survey in the Philippines

Authors: Ramoncito G. Cambel, Dalisay S. Maligalig, Maurice C. Borromeo, Ronald R. Roldan Jr., and Clifford B. Lesmoras

Abstract:

Studies have shown that research and development (R&D) is a good driver of economic growth. Policies and programs that are based on good quality data are expected to produce better results. Hence, to formulate and implement policies and programs in R&D, good quality data is vital. A good data support system is also essential in identifying critical areas that need intervention, formulating viable approaches in addressing these issues, and allocating limited resources. In the Philippines, the Department of Science and Technology (DOST) has been conducting the Survey on Research and Development Expenditures and Personnel (R&D Survey) since 2003 so that R&D data and indicators can be compiled. To ensure that good quality R&D data and indicators are achieved, the DOST granted a research fund to the Institute of Statistics (INSTAT) of the University of the Philippines Los Baños (UPLB) in 2018 to further improve the design, conduct and analysis of the R&D Survey. This paper describes the processes that were developed and implemented through this research grant in relation to the dimensions of data quality, namely, relevance, accuracy, timeliness, accessibility, coherence, and comparability. Based on the evaluation of these processes, the paper also recommends further improvement on the survey operations of future rounds of the R&D Survey.

Keywords: Survey on Research and Development Expenditures and Personnel, data quality


Year: 2022       Vol.: 71       No.: 1      


Record ID: 145    [ Page 1 of 2, No. 16 ]

Analysis of Longitudinal Data with Missing Values in the Response and Covariates Using the Stochastic EM Algorithm

Authors: Ahmed M. Gad and Nesma M. Darwish

Abstract:

Longitudinal data are common in many disciplines, where repeated measurements of a response variable are collected for each subject. Missing values are unavoidable in longitudinal studies, and they may occur in the response variable, the covariates, or both. A dropout pattern occurs when some subjects leave the study prematurely. When the probability of missingness depends on the missing values, and possibly on the observed values, the missing-data mechanism is termed non-random. Ignoring the missing values in this case leads to biased inferences. In this paper we handle missing values in covariates using multiple imputation (MI) and the selection model to fit longitudinal data in the presence of non-random dropout. The stochastic EM (Expectation-Maximization) algorithm is developed to obtain the model parameter estimates. Parameter estimates of the dropout model are also obtained. Standard errors of the estimates are calculated using the developed Monte Carlo method. The performance of the proposed approach is evaluated through a simulation study, and the approach is applied to a real data set.

Keywords: Interstitial Cystitis data; missing covariates; dropout missingness; multiple imputation; selection model; the SEM algorithm.


Year: 2022       Vol.: 71       No.: 1      


Record ID: 144    [ Page 1 of 2, No. 17 ]

Two New Tests for Tail Independence in Extreme Value Models

Authors: Mohammad Bolbolian Ghalibaf

Abstract:

This paper proposes two new tests for tail independence in extreme value models. We use the approach of Falk and Michel, based on the conditional distribution function (df) of X + Y given that X + Y > c, to test for tail independence in extreme value models. We recommend using the Cramér-von Mises and Anderson-Darling tests for tail independence. Simulations show that the two tests outperform the Kolmogorov-Smirnov test, which performs well among the tests proposed by Falk and Michel. Finally, using two real datasets, we illustrate the application of the two proposed tests as well as the traditional tests of Falk and Michel.

Keywords: extreme value model, tail independence, Copula function, Cramér-von Mises test, Anderson-Darling test, Neyman-Pearson test, Kolmogorov-Smirnov test, Fisher’s ? test, Chi-square goodness-of-fit test


Year: 2021       Vol.: 70       No.: 2      


Record ID: 143    [ Page 1 of 2, No. 18 ]

Time Series Prediction of CO2 Emissions in Saudi Arabia Using ARIMA, GM(1,1), and NGBM(1,1) Models

Authors: Z. F. Althobaiti, and A. Shabri

Abstract:

The investigation of the economic aspects of gas emissions, in terms of their volume and consequences, is very important given the current increasing trend. Therefore, predicting carbon dioxide emissions in Saudi Arabia becomes necessary. This study uses annual time series data on CO2 emissions in Saudi Arabia from 1970 to 2016. The study builds prediction models of CO2 emissions in Saudi Arabia using the Autoregressive Integrated Moving Average (ARIMA), the Grey Model GM(1,1), and the Nonlinear Grey Bernoulli Model NGBM(1,1), and compares their efficiency and accuracy based on the MAPE metric. The results revealed that the Nonlinear Grey Bernoulli Model (NGBM) is more accurate than the other prediction models. The results may be useful to the Saudi Arabian government in developing its future economic policies. Accordingly, five policy recommendations are proposed, each of which could play a significant role in the development of acceptable Saudi Arabian climate policies.
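A minimal GM(1,1) implementation, as one of the models compared (NGBM(1,1) adds a Bernoulli power term not shown here); the geometric test series is illustrative, not the Saudi emissions data:

```python
import math

def gm11(x0, horizon=0):
    """GM(1,1) grey forecasting for a positive, trending series x0:
    fit dx1/dt + a*x1 = b on the accumulated series and return fitted
    plus `horizon` forecast values on the original scale (assumes a != 0)."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]                # 1-AGO series
    z = [0.5 * (x1[i] + x1[i + 1]) for i in range(n - 1)]   # background values
    # Least squares on x0[k] = -a*z[k-1] + b  (closed-form 2-parameter OLS)
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(x0[1:]), sum(v * y for v, y in zip(z, x0[1:]))
    a = -(m * szy - sz * sy) / (m * szz - sz * sz)
    b = (sy + a * sz) / m
    def xhat1(k):   # accumulated prediction at 0-based index k
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    total = n + horizon
    acc = [xhat1(k) for k in range(total)]
    return [x0[0]] + [acc[k] - acc[k - 1] for k in range(1, total)]

# Illustrative geometric series: GM(1,1) should track it closely
x0 = [2.0 * 1.1 ** k for k in range(8)]
fitted = gm11(x0, horizon=2)
```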

Keywords: annual time series data, Autoregressive Integrated Moving Average (ARIMA), CO2 emissions, global warming, Grey Model (GM), Nonlinear Grey Bernoulli Model (NGBM), prediction, Saudi Arabia


Year: 2021       Vol.: 70       No.: 2      


Record ID: 142    [ Page 1 of 2, No. 19 ]

Classes of Estimators under New Calibration Schemes using Non-conventional Measures of Dispersion

Authors: A. Audu, R. Singh, S. Khare, N. S. Dauran

Abstract:

In this paper, we propose two classes of estimators under two new calibration schemes for a heterogeneous population, incorporating auxiliary information from non-conventional measures of dispersion, which are robust against the presence of outliers in the data. Theoretical results are supported by simulation studies conducted on six bivariate populations generated using exponential and normal distributions. The biases and percentage relative efficiencies (PRE) of the proposed and other related estimators were computed, and the results indicate that the estimators proposed under the suggested calibration schemes perform, on average, more efficiently than the conventional unbiased estimator and the estimators of Rao and Khan (2016) and Nidhi et al. (2017).

Keywords: heterogeneous population, Outliers, Estimators, Robust measures, Population mean


Year: 2021       Vol.: 70       No.: 2      


Record ID: 141    [ Page 1 of 2, No. 20 ]

A New Compound Probability Model Applicable to Count Data

Authors: Showkat Ahmad Dar, Anwar Hassan, Peer Bilal Ahmad and Bilal Ahmad Para

Abstract:

In this paper, we obtain a new model for count data by compounding the Poisson distribution with the two-parameter Pranav distribution. Important mathematical and statistical properties of the distribution are derived and discussed. Parameter estimation is then discussed using the maximum likelihood method. Finally, a real data set is analyzed to investigate the suitability of the proposed distribution for modeling count data.

Keywords: Poisson distribution, two parameter Pranav distribution, compound distribution, count data, simulation study, maximum likelihood estimation

Download this article:

Year: 2021       Vol.: 70       No.: 2      


Record ID: 140    [ Page 1 of 2, No. 21 ]

A Modified Ridge Estimator for the Logistic Regression Model

Authors: Mazin M. Alanaz, Nada Nazar Alobaidi and Zakariya Yahya Algamal

Abstract:

The ridge estimator has been consistently demonstrated to be an attractive shrinkage method for reducing the effects of multicollinearity. The logistic regression model is widely applied when the response variable is binary. However, multicollinearity is known to inflate the variance of the maximum likelihood estimator of the logistic regression coefficients. To address this problem, a logistic ridge regression model has been proposed by numerous researchers. In this paper, a modified logistic ridge estimator (MLRE) is proposed and derived. The idea behind the MLRE is to obtain a diagonal matrix with small diagonal elements, which leads to a smaller shrinkage parameter; the resulting estimator can therefore perform better while incurring only a small amount of bias. Our Monte Carlo simulation results suggest that the MLRE can bring significant improvement relative to other existing estimators.
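The abstract does not specify the MLRE's exact modification, but the baseline it builds on, ridge-penalized logistic regression, can be sketched in a few lines of numpy. This is an illustrative sketch only (the data, penalty value k, and function names are invented, not the authors' implementation):

```python
import numpy as np

def ridge_logistic(X, y, k=1.0, n_iter=50):
    """Ridge-penalized logistic regression via Newton's method.
    Each step solves (X'WX + kI) delta = X'(y - p) - k*beta."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = X @ beta
        prob = 1.0 / (1.0 + np.exp(-eta))
        W = prob * (1.0 - prob)                      # IRLS weights
        grad = X.T @ (y - prob) - k * beta           # penalized score
        hess = X.T @ (X * W[:, None]) + k * np.eye(p)
        beta = beta + np.linalg.solve(hess, grad)
    return beta

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
true_beta = np.array([1.0, -2.0, 0.5])
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)

b_ridge = ridge_logistic(X, y, k=5.0)
b_mle = ridge_logistic(X, y, k=1e-8)   # essentially unpenalized
```

The penalty k trades a small bias for reduced variance: the ridge fit has a smaller coefficient norm than the (near-)MLE fit.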

Keywords: multicollinearity, ridge estimator, logistic regression model, shrinkage, Monte Carlo simulation

Download this article:

Year: 2021       Vol.: 70       No.: 2      


Record ID: 139    [ Page 1 of 2, No. 22 ]

Modelling the Right-Tail Conditional Expectation and Variance of Various Philippine Stocks Return using the Class of Beta Generalized Pareto Distribution

Authors: Angelo E. Marasigan

Abstract:

A risk measure such as value-at-risk (VaR) is commonly used by financial institutions for capital management and for calculating the amount of risk exposure against a loss. Because VaR is not a coherent risk measure, the conditional tail expectation (CTE) is often computed as a remedy. In this study, formulas for the CTE and the conditional tail variance (CTV) under the class of beta generalized Pareto (bgP) distributions were derived. The bgP is used to model the distribution of returns of various Philippine stock indices under different scenarios, with parameters fitted by maximum likelihood estimation via the simulated annealing method in R; the fitted models are then used to compute the CTE and CTV of the return data. To assess the performance of the bgP in modeling financial data sets, a comparison was made with its generating distributions, the (generalized) beta and Pareto models. Finally, historical simulation was also carried out to compute the corresponding VaR, CTE, and CTV for comparison with the model-based calculations.
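The historical-simulation quantities mentioned at the end of the abstract can be sketched as follows (toy heavy-tailed returns; the bgP fitting itself is not reproduced here):

```python
import numpy as np

def historical_risk(returns, alpha=0.95):
    """Historical-simulation VaR, CTE, and CTV of a loss distribution.
    Losses are negated returns; VaR is the alpha-quantile of losses,
    CTE the mean loss beyond VaR, CTV the variance of losses beyond VaR."""
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, alpha)
    tail = losses[losses >= var]
    return var, tail.mean(), tail.var(ddof=1)

rng = np.random.default_rng(42)
rets = rng.standard_t(df=4, size=10_000) * 0.01   # heavy-tailed toy returns
var95, cte95, ctv95 = historical_risk(rets, 0.95)
```

By construction CTE ≥ VaR: the expectation is taken over losses already beyond the VaR threshold, which is why CTE (unlike VaR) is a coherent risk measure.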

Keywords: risk measure, Beta generalized Pareto distribution, value-at-risk, conditional tail expectation, conditional tail variance, heavy-tailed

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 138    [ Page 1 of 2, No. 23 ]

Development of an Alternative Municipal and City Level Competitiveness Index in the Philippines

Authors: Ramoncito G. Cambel and Zita VJ. Albacea

Abstract:

The Philippine Cities and Municipalities Competitiveness Index (CMCI) by the National Competitiveness Council (NCC) measures the competitiveness of local government units. This paper presents the results of a study that employed an alternative weighting method for the indicators using principal component analysis and factor analysis. Moreover, the study utilized both national census data and administrative data from the government. Four factors were extracted from the available data: “Housing and Household Characteristics, Financial Institutions and Capacity of Government for Health Services,” “Establishments for Tourists Accommodation,” “Cost of Labor,” and “Capacity of Local Government to Deliver Services.” Unlike the distribution of the NCC’s 2016 competitiveness levels of municipalities and cities, which is symmetric, the distribution of the proposed alternative competitiveness index is positively skewed. This suggests that only a few municipalities and cities can be considered highly competitive. A Moran’s I of 0.3457 indicates positive spatial autocorrelation in the competitiveness levels of Philippine municipalities and cities. Municipalities and cities in the Provinces of Bulacan, Pampanga, Cavite, Laguna, Rizal, and Cebu, and in the National Capital Region, are generally identified as those in the high-high cluster in terms of the proposed alternative 2016 competitiveness level. In contrast, municipalities and cities in the Provinces of Mindoro, Romblon, and Surigao del Norte, and those in the Cordillera Administrative Region, Ilocos Region, Cagayan Valley Region, and Eastern Visayas Region, are generally in the low-low cluster. Statistical properties of the index such as consistency, accuracy, and precision were then assessed using the bootstrap resampling technique.
Results showed that the proposed alternative CMCI is unbiased and mean-square-error consistent, indicating that it could serve as an alternative municipal and city level competitiveness index for the Philippines. The index comprises four factors with 33 indicators, weighted according to their contribution to the variability of the index.
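The PCA-based weighting step can be illustrated with a minimal sketch (synthetic indicators; the study's actual 33 indicators and factor-analysis refinements are not reproduced):

```python
import numpy as np

def pca_index(data):
    """First-principal-component weights for a composite index.
    Indicators are standardized; weights are the loadings of the
    leading eigenvector of the correlation matrix, scaled to sum to 1."""
    Z = (data - data.mean(axis=0)) / data.std(axis=0, ddof=1)
    corr = np.corrcoef(Z, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)
    w = np.abs(eigvecs[:, -1])          # eigh sorts ascending: take last column
    w = w / w.sum()
    return Z @ w, w

rng = np.random.default_rng(1)
base = rng.normal(size=(100, 1))
indicators = base + 0.3 * rng.normal(size=(100, 4))  # 4 correlated indicators
index, weights = pca_index(indicators)
```

With positively correlated indicators, the first principal component assigns larger weight to the indicators that explain more of the common variability, which is the idea behind PCA-based index weighting.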

Keywords: statistical index, spatial analysis, factor analysis

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 137    [ Page 1 of 2, No. 24 ]

Time Series Approach for Modelling the Merger and Acquisition Series: An Application to Indian Banking System

Authors: Varun Agiwal and Jitendra Kumar

Abstract:

In time series, the present observation depends not only on its own past observation(s) but may also involve other explanatory or exogenous variables. Such variables do not necessarily exert influence over the long run and may be removed, discontinued, or merged and acquired (M&A) as their effect weakens due to declining correlation. M&A occurs when one or more series no longer meet the conditions required to survive in the system on their own. To analyze the M&A concept, this study proposes a merged autoregressive (M-AR) model for examining the impact of a merger on the parameters of the acquiring series. A Bayesian approach is used for parameter estimation under different loss functions and compared with the least squares estimator. To test the presence/association of the merged series in the acquiring series, the Bayes factor, the full Bayesian significance test, and a posterior probability based on credible intervals are derived. A simulation study and an empirical application to banking indicators of Indian banks are carried out to evaluate the performance of the proposed model. The study concludes that the proposed time series model resolves the problem of discontinuity in the series and remains statistically manageable.

Keywords: autoregressive model, Bayesian inference, merger and acquisition series, Indian bank

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 136    [ Page 1 of 2, No. 25 ]

Application of U-statistics in Estimation of Scale Parameter of Bilal Distribution

Authors: R. Maya, M.R. Irshad, and S.P. Arun

Abstract:

In this work, we consider a new lifetime distribution, the Bilal distribution, and derive the best linear unbiased estimator (BLUE) of its scale parameter based on order statistics. We further estimate the scale parameter by U-statistics whose kernels are best linear functions of order statistics. The efficiency of the proposed estimator relative to the minimum variance unbiased estimator (MVUE) has also been evaluated.

Keywords: Order statistics, Bilal distribution, best linear unbiased estimator, U-statistics

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 135    [ Page 1 of 2, No. 26 ]

Analysis of Randomized Clinical Trial in the Presence of Non-Compliance: Comparison of Causal Models

Authors: Ali Reza Soltanian, Hassan Ahmadinia, and Ghodratollah Roshanaei

Abstract:

Non-compliance is a common deviation from randomized clinical trial protocols. Standard approaches for comparing drug effects in randomized clinical trials in the presence of non-compliance are intention-to-treat (ITT), as-treated (AT), and per-protocol (PP) analysis. Each of these approaches has disadvantages when evaluating the effect of medication in the presence of non-compliance. The current study compared the accuracy of the instrumental variable (IV), ITT, AT, and PP techniques. We assumed that non-compliance occurred only for some patients in the new treatment group, and independently of patient outcomes. To compare the techniques, various scenarios were simulated. The MSE of both the PP and IV models changes only with the value of w (the non-compliance ratio): at all values of θ (the treatment effect), the MSE of these two models increases with increasing non-compliance ratio, and changing θ does not affect the MSE. When non-compliance occurs only in the intervention group, the MSE of the AT model likewise changes only with w and is unaffected by θ. The MSE of the ITT model, however, is strongly influenced by θ. At low θ values, the ITT model has lower MSE than the other methods and better estimates the treatment effect, and its MSE increases only slightly with increasing w. As θ increases, however, the MSE of the ITT model grows, and it then increases sharply with increasing w.
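A toy simulation under the abstract's assumption (non-compliance at ratio w occurs only in the new-treatment arm, independently of outcomes) shows why the IV (Wald) estimator recovers θ while ITT is diluted by the factor (1 − w). All parameter values here are illustrative, not from the paper's scenarios:

```python
import numpy as np

rng = np.random.default_rng(7)
n, theta, w = 100_000, 2.0, 0.3        # sample size, treatment effect, non-compliance ratio

z = rng.integers(0, 2, n)              # randomized assignment
# non-compliance only in the new-treatment arm, independent of outcomes
d = np.where(z == 1, rng.random(n) >= w, 0).astype(float)
y = theta * d + rng.normal(size=n)     # outcome depends on treatment actually received

itt = y[z == 1].mean() - y[z == 0].mean()          # intention-to-treat: ~ theta*(1-w)
iv = itt / (d[z == 1].mean() - d[z == 0].mean())   # Wald/IV estimator: ~ theta
```

The IV estimator rescales the ITT contrast by the compliance rate, so it remains consistent for θ even as w grows, at the cost of higher variance when compliance is low.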

Keywords: causal model, non-compliance, randomized clinical trials, simulation

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 134    [ Page 1 of 2, No. 27 ]

An Improved Class of Estimators of Population Mean under Simple Random Sampling

Authors: Shashi Bhushan, Anoop Kumar, and Saurabh Singh and Sumit Kumar

Abstract:

In this article, we consider an improved class of estimators of the population mean using additional information under simple random sampling (SRS). The expressions for the bias and mean square error of the proposed class of estimators are obtained up to first order of approximation. In addition, some well-known estimators are identified as particular members of the proposed class. The theoretical results are established, and an empirical study is carried out using real and simulated data sets. The findings are rather satisfactory, showing clear improvement over the existing estimators.

Keywords: simple random sampling, mean square error, efficiency

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 133    [ Page 1 of 2, No. 28 ]

A Procedure for the Generation of Small Area Estimates of Philippine Poverty Incidence

Authors: Nelda A. Nacion and Arturo Y. Pacificador

Abstract:

The main purpose of this study is to propose an alternative procedure for generating small area estimates of poverty incidence using imputation-like procedures coupled with a calibration of estimates to ensure coherence with the regional estimates. Specifically, this study applied a deterministic regression approach and a stochastic imputation-like procedure similar to stochastic regression, and applied calibration techniques to ensure that the small area estimates conform to the known regional estimates. Unlike the ELL methodology, the error terms used for prediction are based on the empirical distribution of the residuals, providing protection against misspecification of the error model. At the same time, the procedure is simpler to carry out with available computing resources. In addition, the proposed methodology utilized only data from the census short form, which is a 100% sample, thereby eliminating another source of variation compared to the census long form, which is collected from a sample of households. This study used the Family Income and Expenditure Survey (FIES) of 2009 and the Census of Population and Housing (CPH Form 2) of 2010 to produce reliable estimates of poverty incidence at the municipal level; since the CPH is conducted in the Philippines every 10 years, the 2010 CPH is the latest available data. The researcher produced small area estimates of poverty in the Philippines at the municipal level by combining survey data with auxiliary data derived from the census, fitting different models for each region. From the results, the following conclusions were derived: Stochastic Regression Imputation (SRI) is better than Deterministic Regression Imputation (DRI) for attaching income to the CPH. The SRI preserved the distribution in 82% of the regions, while the DRI preserved only 10.22% of the validation sets.
Since the error in fitting the DRI to the CPH does not follow a well-known distribution (such as the Normal distribution), a non-parametric method of estimating the error was used to generate the errors attached in the SRI. This technique, Kernel Density Estimation (KDE) or the histogram method, was found to be effective. Using the calibration technique achieved municipal estimates that conform to the regional estimates. The estimates of poor households in the CPH reflect the bottom 30% of the wealth index.

Keywords:

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 132    [ Page 1 of 2, No. 29 ]

Using Uncertainty and Sobol’ Sensitivity Analysis Techniques in the Evaluation of a Composite Provincial Level Food Security Index

Authors: Christian P. Umali and Felino P. Lansigan

Abstract:

A composite provincial-level food security index can measure food security at the sub-national level, which can be helpful in policy making. However, it can be non-robust due to the uncertainties involved in the choice of input factors used in its construction, namely: different sources of data, normalization methods, weighting schemes, aggregation systems, and the level of importance placed on the different dimensions of food security, i.e., availability, accessibility, utilization, and stability. In this study, uncertainty analysis was employed to assess the robustness of the index constructed through the conventional approach. Sensitivity analysis was done to quantify the influence each factor has on the index-building process, to identify factors that should be prioritized and can be fixed in the subsequent development of the index, and to determine which levels of the factors are responsible for producing the desirable model outcomes. The study was exploratory in considering the ratio-with-mean normalization technique and a combination of the additive and geometric aggregation methods as potential inputs in the index construction. In computing the Sobol’ sensitivity indices, the formulas suggested by Jansen et al. (1994) and Nossent and Bauwens (2012) were investigated and compared under varying sample sizes. The results can inform a more appropriate choice of procedure for computing the Sobol’ sensitivity indices at an optimum sample size, as well as provide insights for the future development of a uniform and more defensible composite provincial-level food security index.
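A Jansen-type first-order Sobol' estimator can be sketched as follows. The toy additive model has known indices S = (0.2, 0.8); the formula shown is the commonly cited Jansen form, not necessarily the exact variant the study compared:

```python
import numpy as np

def jansen_first_order(f, d, N, rng):
    """First-order Sobol' indices via a Jansen-type estimator:
    S_i = 1 - E[(f(B) - f(A_Bi))^2] / (2 * Var(Y)),
    where A_Bi is sample matrix A with column i taken from matrix B."""
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA, fB = f(A), f(B)
    V = np.var(np.concatenate([fA, fB]))     # total output variance
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S[i] = 1.0 - np.mean((fB - f(ABi)) ** 2) / (2.0 * V)
    return S

# Additive toy model Y = X1 + 2*X2, Xi ~ U(0,1): analytic S = (0.2, 0.8)
f = lambda x: x[:, 0] + 2.0 * x[:, 1]
S = jansen_first_order(f, d=2, N=100_000, rng=np.random.default_rng(3))
```

Because f(B) and f(A_Bi) share only input i, their squared difference averages to 2V − 2V_i, which is how the formula isolates each factor's first-order contribution.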

Keywords: uncertainty analysis, sensitivity analysis, Sobol’ sensitivity index, Monte Carlo integral, composite index, food security

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 131    [ Page 1 of 2, No. 30 ]

Investigation of Factors Contributing to Indigenous Language Decline in Nigeria

Authors: N. A. Ikoba and E. T. Jolayemi

Abstract:

The declining fortunes of some of Nigeria’s indigenous languages are examined in this paper. A multi-dimensional indigenous language-use questionnaire was constructed to elicit data through a survey carried out in some Nigerian cities. The aim was to acquire relevant data on indigenous language ability and the possible causes of language-use decline. The survey results showed a low level of indigenous language literacy for most of the languages surveyed. The proportion of language use at home was also generally low for most of the surveyed languages, below the 70% threshold for virile languages. Several reasons were adduced for the non-transfer of indigenous language ability from parents to children; tests of statistical independence showed that the respondents’ perception that their language is inferior to English, the belief that the child will be limited in school, negligence, and the inability of parents to speak their heritage language were the major reasons for the decline. A logistic regression analysis of the data also showed that acquisition of language literacy depended on a person’s place of childhood, age, level of education, frequency of use at home, and the indigenous language spoken by the person’s mother.

Keywords: indigenous languages, language literacy, test of independence, intergenerational transmission, logistic regression

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 130    [ Page 1 of 2, No. 31 ]

Examining the Theoretical Assumption of a Six-fold Structure of Management Competency Sub-scales (MCS)

Authors: Manuelito De Vera Bengo

Abstract:

The Management Competency Sub-scales (MCS) in this inquiry were constructed under the theoretical assumption of a six-fold structure: self-image; leadership; skills; action; performance; and orientation. However, no empirical evidence is so far available to support this assumption, so a comprehensive measure is needed to adequately gauge its validity. The aim of this study was to assess the validity of the categorization of management competencies into the six overarching MCS. The results showed that instead of the expected six-fold structure, the MCS comprised eight factors: expertise; self-image; skills; leadership; innovation; influencer; sustainability; and orientation. Looking at homogeneity and scale length in tandem, the scales were subsequently refined and the number of items reduced to 45. The inter-correlations between the derived sub-scales, as well as the mean loadings of the items on the sub-scales, were significant, indicating the validity of the construct. The Cronbach alphas for the different sub-scales were at an acceptable level, above .7, suggesting relative stability of the derived scales. Comparing the inter-scale correlations of the management competency sub-scales with their average Cronbach alpha, the values were found to be substantially different, providing support for the discriminant validity of the construct. Furthermore, in a second-order factor analysis, all the sub-scales loaded above .3 on the single extracted factor, suggesting convergent validity. The study provides a good alternative to the bounty of competency models/frameworks that have been developed in the area of management competency.
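The Cronbach alpha criterion used above is straightforward to compute; a minimal sketch with toy item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha: (k/(k-1)) * (1 - sum of item variances / variance of totals),
    where k is the number of items and totals are row sums across items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)

# three perfectly parallel items -> alpha = 1.0
scores = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
alpha = cronbach_alpha(scores)
```

Perfectly parallel items give alpha = 1; in practice, values above .7 (the study's threshold) are usually read as acceptable internal consistency.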

Keywords: management competency sub-scales, factor analysis, validity, cronbach alpha, inter-correlations

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 129    [ Page 1 of 2, No. 32 ]

A Validation of the Non-Parametric Continuous Norming Procedure

Authors: Melissa Jane Siy and Francisco N. de los Reyes

Abstract:

This study analyzed and validated the statistical aspect of the nonparametric continuous norming technique, which is a method used in creating scores in psychometric tests. Using the Work Profile Questionnaire - Emotional Intelligence (WPQei) with Filipino sample respondents, the study was able to demonstrate how the norming technique can be used to create age-group-based scores (age norms). Based on the results, the models from the technique can produce useable scores in practice with acceptable adjusted R-squared values; however, some challenges emerged with regard to the process of choosing the smoothing parameter, consistency of the significant variables and coefficient signs of the model, and the calculation of the score tables. Bootstrapping is recommended in improving the robustness of the technique.

Keywords: norming, WPQei, age-group-based score tables, bootstrap

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 128    [ Page 1 of 2, No. 33 ]

A Sequential Markov Chain Model of FIFA World Cup Winners

Authors: Nehemiah A. Ikoba

Abstract:

In this paper, a sequential Markov chain conceptualization of the winners of the FIFA World Cup is presented. The aim was to capture the dynamics of the World Cup and predict future winners via Markov chain analysis. A sequentially incremented state-space Markov chain is used to approximate the process of winning the FIFA World Cup, and the corresponding Markov chains at every epoch where the state space increases were computed. The analysis showed that the model closely reproduces the sequence of previous World Cup winners. It is predicted that a new winner may emerge at the 2022 World Cup in Qatar. However, if a new winner does not emerge, then on the basis of both the sequential Markov chain and the first passage matrix of the conceptualized model, Brazil is the most probable winner of the 2022 World Cup, followed by Italy and Germany. The sequential Markov chain approach can be applied to other sporting events and scenarios in which there is only a small probability that the number of observed states will increase beyond a small set of states.
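The core step, estimating a transition matrix from a winner sequence, can be sketched as follows. The winner sequence here is invented for illustration (it is not actual World Cup history), and the sequential state-space growth and first-passage analysis of the paper are omitted:

```python
import numpy as np

# Toy winner sequence over past tournaments (states = countries observed so far)
winners = ["BRA", "GER", "BRA", "ITA", "ESP", "GER", "FRA", "BRA", "GER", "BRA"]
states = sorted(set(winners))
idx = {s: i for i, s in enumerate(states)}

# Row-normalized transition counts between consecutive winners
counts = np.zeros((len(states), len(states)))
for a, b in zip(winners, winners[1:]):
    counts[idx[a], idx[b]] += 1
P = counts / counts.sum(axis=1, keepdims=True)

# Distribution of the next winner given the most recent one
next_dist = P[idx[winners[-1]]]
```

In the paper's sequential version, the state space is enlarged whenever a first-time winner appears, and the chain is re-estimated at each such epoch.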

Keywords: stochastic processes, sports analytics, transition probability matrix, mean first passage time, mean recurrence time

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 127    [ Page 1 of 2, No. 34 ]

Recursive Quantile Estimation through a Stochastic Algorithm

Authors: A. Bachir and K. Djeddour-Djaballah

Abstract:

In this paper, we propose an estimator of a quantile of an unknown population. By treating this as a stochastic approximation problem, we obtain a recursive estimator of the quantile and establish its almost-sure convergence as well as its asymptotic normality. Simulation results show that the proposed estimator works well.
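A standard Robbins-Monro-type recursion for the p-quantile, of the kind the abstract describes, can be sketched as follows (the step constant c and the starting value are illustrative choices, not the paper's):

```python
import numpy as np

def recursive_quantile(stream, p, c=1.0):
    """Stochastic-approximation recursion for the p-quantile:
    q_{n} = q_{n-1} + (c / n) * (p - 1{X_n <= q_{n-1}})."""
    q = 0.0
    for n, x in enumerate(stream, start=1):
        q += (c / n) * (p - (x <= q))
    return q

rng = np.random.default_rng(0)
data = rng.random(200_000)          # U(0,1): the true median is 0.5
q_hat = recursive_quantile(data, p=0.5)
```

The estimate is updated one observation at a time, so the quantile can be tracked over a data stream without storing or sorting the sample.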

Keywords: stochastic approximation, non-parametric estimation, quantile estimation

Download this article:

Year: 2020       Vol.: 69       No.: 1      


Record ID: 126    [ Page 1 of 2, No. 35 ]

Penalty Analysis with Resampling Method for Sensory Evaluation

Authors: Reanne Len C. Arlan, James Roldan S. Reyes, and Mary Denelle C. Mariano

Abstract:

Penalty analysis is the most widely accepted approach in consumer-oriented sensory evaluation dealing with just-about-right (JAR) data. However, testing the significance of a penalizing sensory attribute on the overall acceptability of a product is limited by the absence of a standard error for the mean drop. To address this limitation, this paper applied alternative approaches using bootstrap and jackknife resampling. Using sensory evaluation data on the soft-serve vanilla-based ice cream of a certain fast food company, the ordinary and the proposed jackknifing penalty analyses yielded similar results. While both analyses found “too much sweetness” to be the most troublesome attribute, the bootstrapping penalty analysis showed that a “too weak vanilla flavor” has an effect of a 0.45-point mean drop in the product’s overall acceptability. Through empirical validation, the jackknife mean drop estimates were found to be closer to the original mean drops and to have smaller standard error estimates than the bootstrap mean drop estimates. Overall, the bootstrapping penalty analysis was more conservative in determining critical sensory attributes, while the proposed jackknifing penalty analysis reproduced the results of the ordinary penalty analysis with the further ability to flag problematic sensory attributes based on statistical evidence. This can offer a more powerful and useful evaluation tool for product development in the food industry.
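The jackknife treatment of the mean drop can be sketched as follows (toy JAR data invented for illustration; the paper's 9-point hedonic data are not reproduced):

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def mean_drop(jar_flags, liking):
    """Mean drop = mean liking among JAR respondents minus mean liking
    among respondents who rated the attribute off-JAR (e.g. 'too much')."""
    jar = [l for f, l in zip(jar_flags, liking) if f]
    off = [l for f, l in zip(jar_flags, liking) if not f]
    return mean(jar) - mean(off)

def jackknife_se(jar_flags, liking):
    """Leave-one-respondent-out jackknife standard error of the mean drop."""
    n = len(liking)
    reps = [mean_drop(jar_flags[:i] + jar_flags[i+1:], liking[:i] + liking[i+1:])
            for i in range(n)]
    rbar = mean(reps)
    return math.sqrt((n - 1) / n * sum((r - rbar) ** 2 for r in reps))

jar_flags = [True, True, True, True, False, False, False, False]
liking = [8, 7, 9, 8, 5, 6, 4, 5]
drop = mean_drop(jar_flags, liking)
se = jackknife_se(jar_flags, liking)
```

With a standard error in hand, a mean drop can be tested for significance rather than only ranked, which is the gap the paper addresses.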

Keywords: bootstrapping, jackknifing, just about right scale, 9-point hedonic score, mean drop, product development

Download this article:

Year: 2020       Vol.: 69       No.: 1      


Record ID: 125    [ Page 1 of 2, No. 36 ]

Evaluation of Sampling Methods for Content Analysis of Facebook Data

Authors: Xavier Javines Bilon and Jose Antonio R. Clemente

Abstract:

A methodological challenge for researchers performing content analysis on social media data is deciding on a sampling procedure that obtains content to be analyzed with the least sampling error. The study used and recommended two kinds of elementary unit—post and day—that allow probability sampling of Facebook data, regardless of whether the sampling frame of all posts within the time period of interest is obtainable. Four sampling designs for post as elementary unit and five for day as elementary unit—including three commonly used sampling options for content analysis: simple random sampling without replacement (SRSWOR), constructed week sampling, and consecutive day sampling—were employed on Facebook data mined from Mocha Uson Blog from 2010 to 2018. Estimates of parameters, such as measures of user engagement and proportions of topic-related posts, were obtained at increasing sample sizes. Sampling designs for each elementary unit were evaluated by comparing the normalized area under the coefficient of variation curve (NAUCV) over the different sample sizes. For post as elementary unit, with content type as the stratification variable, stratified random sampling (StRS) using Neyman allocation based on total user engagement is recommended (average NAUCV = 31.28%). For day as elementary unit, SRSWOR is recommended (average NAUCV = 42.31%).
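Neyman allocation, used in the recommended StRS design, allocates the sample in proportion to each stratum's size times its standard deviation (n_h ∝ N_h·S_h). A minimal sketch with invented strata (not the study's Facebook strata):

```python
def neyman_allocation(n, sizes, sds):
    """Neyman allocation: n_h proportional to N_h * S_h,
    with integer rounding by the largest-remainder method."""
    weights = [N * S for N, S in zip(sizes, sds)]
    total = sum(weights)
    raw = [n * w / total for w in weights]
    alloc = [int(x) for x in raw]                 # floor each share
    order = sorted(range(len(raw)), key=lambda h: raw[h] - alloc[h], reverse=True)
    for h in order[: n - sum(alloc)]:             # hand out the remaining units
        alloc[h] += 1
    return alloc

# three strata: larger and more variable strata receive more sample
alloc = neyman_allocation(100, sizes=[500, 300, 200], sds=[2.0, 1.0, 4.0])
```

The small but highly variable third stratum receives more sample than the larger but homogeneous second stratum, which is the point of Neyman over proportional allocation.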

Keywords: content analysis, sampling method, social media analytics

Download this article:

Year: 2020       Vol.: 69       No.: 1      


Record ID: 124    [ Page 1 of 2, No. 37 ]

Bounds Testing Approach in Determining the Impact of Climate Change Indicators to the Rice Yield of Central Luzon

Authors: Hernan G. Pantolla and Rechel G. Arcilla

Abstract:

Crop production, among many other sectors, is threatened by climate change; in the Philippines, one affected crop is rice. Various models have been applied to investigate the factors that affect the production of this grain, and specialized software has been used to simulate its production under different conditions, but these efforts were carried out on a larger scale. Local and international organizations call for crop production models specific to locations that can describe long- and short-term responses to changes in the environment. Hence, a model is proposed that uses the bounds testing feature of the Autoregressive Distributed Lag (ARDL) model to determine the long-run and short-run effects of selected climate change indicators on the rice yield of Central Luzon, the rice granary of the Philippines. The findings showed that the speed of adjustment towards equilibrium is 1.26%. Precipitation, temperature, and atmospheric carbon dioxide concentration have long-run impacts, while the lagged differences of the yield itself and of temperature, as well as the first difference of temperature, have short-run effects. The sea surface temperature anomaly was found to have no significant contribution. The fit of the generated ARDL model was substantiated by several diagnostic tests, and the model showed gains in forecasting accuracy compared with a baseline model. The results of this study could be used in decision-making and policy development.

Keywords: autoregressive distributed lag, bounds testing, climate change indicators, rice yield, Central Luzon

Download this article:

Year: 2020       Vol.: 69       No.: 1      


Record ID: 123    [ Page 1 of 2, No. 38 ]

Analyzing the Impact of RPRH Law Implementation on Poverty Reduction in the Philippines

Authors: Michael Ralph M. Abrigo, Aniceto C. Orbeta Jr. and Alejandro N. Herrin

Abstract:

A key research question with relevance to policy is how the full implementation of the RPRH law, with attention to its three key elements, namely comprehensive sexuality education, family planning, and maternal and child health, contributes to economic growth and poverty reduction. This study describes the various pathways through which these key elements would work their way to affecting fertility, human capital formation, economic growth, and poverty reduction. Among the pathways considered, recent studies have provided insights into the economic and social impact of preventing early childbearing, reducing maternal mortality, and preventing child stunting. However, a gap exists in measuring the effect of achieving couples’ desired fertility on economic growth and poverty reduction through its impact on the age structure, investments in human capital, and productivity. The study addresses this gap by estimating the economic gains from a full implementation of the RPRH law, with attention to the family planning component, as opposed to delayed-implementation or no-implementation (“business as usual”) scenarios. The results show that helping couples achieve their desired number of children can have substantial economic benefits in terms of more rapid economic growth arising from the first and second demographic dividends, which in turn accelerates poverty reduction in terms of both incidence and the number of poor.

Keywords: RPRH law, total fertility rate, demographic dividends, poverty reduction

Download this article:

Year: 2020       Vol.: 69       No.: 1      


Record ID: 122    [ Page 1 of 2, No. 39 ]

Consumer Expectations Survey and Quarterly Social Weather Survey: Evidence of Convergent Validity and Causality

Authors: Edsel L. Beja Jr.

Abstract:

The paper tests the convergent validity and causality of the Consumer Expectations Survey from the Bangko Sentral ng Pilipinas and the Quarterly Social Weather Survey from the Social Weather Stations. The results indicate convergent validity and bi-directional causality. Further results reveal that both share a common set of determinants. Overall, the findings imply that the Consumer Expectations Survey and the Quarterly Social Weather Survey embody comparable information, so one can serve as a proxy measure of the other. For policy, the findings support the view that a monetary approach to managing the overall performance of the country, especially the inflation rate, in conjunction with a fiscal approach to securing the provision of basic social services, is key to the effective management of sentiments and to an improvement in the quality of life.

Keywords: Consumer Expectations Survey, Quarterly Social Weather Survey, convergent validity, cointegration, causality, Toda-Yamamoto procedure

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 121    [ Page 1 of 2, No. 40 ]

The Impact of Basic Education Reform on the Educational Participation of 16- to 17-year-old Youth in the Philippines

Authors: Geoffrey M. Ducanes and Dina Joan S. Ocampo

Abstract:

The study measures the impact of the implementation of the Senior High School (SHS) program, which came into full effect in school year 2017–2018, on the school participation of 16- to 17-year-old learners in the Philippines. The SHS program, which extended secondary education in the country from four to six years, was the most ambitious education reform in the country in recent memory. The study found that the SHS program resulted in an increase in the overall school participation rate of at least 13 percentage points among 16- to 17-year-olds. Perhaps more importantly, the increase was found to be highly progressive, with 16- to 17-year-olds in the two bottom income quintiles experiencing the largest increases by a wide margin. Both male and female students benefited from the program, although the gains appear to be higher for female students, and most of the gains in school participation occurred outside Metro Manila.

Keywords: impact evaluation, logit regression, education reform, senior high school, gender in education

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 120    [ Page 1 of 2, No. 41 ]

Rapid Assessment of Real Estate Loan Disapproval via Predictive Modeling: A Case for the Philippines

Authors: Adrian Nicholas A. Corpuz and Joseph Ryan G. Lansangan

Abstract:

The Philippines is currently experiencing a housing backlog, which is expected to reach 6.5 million units by the year 2030 if nothing is done about it. It is in this context that the government and the private sector have partnered to address the backlog. Financing institutions such as private banks and the Home Development Mutual Fund (i.e., Pag-IBIG) offer different home loans to enable Filipinos to afford these houses. Using a local real estate development's dataset, the study explores the application of predictive models in quickly determining whether a client is likely to get a home loan approved once he or she submits the preliminary documents for a home loan. Results show that, in terms of accuracy, decision trees and random forests are superior to binary logistic regression in predicting home loan disapproval. The best predictive model is the random forest, and results show that the main determinants of getting a home loan approved are the loan equity term, the total contract price of the house, the equity payment status, and the income of the client.

Keywords: real estate, home loan, binomial logistic regression, decision tree, CART, CTree, CHAID, random forest

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 119    [ Page 1 of 2, No. 42 ]

Spatio-temporal Analysis of Animal Rabies Cases in Negros Occidental, Philippines from 2012 to 2018

Authors: Joseph L. Arbizo, Philip Ian P. Padilla, Marilyn S. Sumayo, Mitzi N. Meracap, Andrea Marie N. Napulan, Rex Victor V. Dolorosa, Princess Monic Q. Velasco, Leslie S. Asorio, Thea Joy A. Clarito, James Matthew V. Recabar, Sael D. Rodriguez

Abstract:

Rabies is a dangerous and deadly zoonotic disease that infects domestic and wild animals and is transmissible to humans. Animal rabies, particularly of the canine and feline types, is considered a serious threat to public health. Thus, all prevention and control efforts to reduce human rabies cases stem from the identification of high-risk communities where canine or feline rabies is prevalent. With the province having recorded the highest number of cases in recent years, this research applied spatio-temporal analysis to animal rabies cases in Negros Occidental, Western Visayas, Philippines. Hotspot analysis based on the Getis-Ord Gi* statistic was used to identify statistically significant hotspots of animal rabies cases in the province. The mean center and standard deviational ellipse were computed to identify the epicenter, dispersion, and yearly directional trends of animal rabies cases. Emerging hotspot analysis based on the Getis-Ord Gi* and Mann-Kendall statistics was performed to identify statistically significant clusters with significant temporal trends. Spatial analysis identified the major cities, such as Bacolod City and Bago City, and their surrounding cities and municipalities as being at high risk of animal rabies from 2012 to 2018. The epicenter of cases has been slowly shifting from the northern part of the province in earlier years towards its central part in recent years. Twenty-six (26) space-time clusters of animal rabies cases in Negros Occidental were found to have “intensifying”, “consecutive”, “oscillating”, and “sporadic” time trends. Two clusters classified as “new” hotspots were identified in the central part of the province. The results presented in this study could support rabies case surveillance and the development of care and prevention programs for rabies control.
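As a rough illustration of the hotspot statistic named above, the following is a minimal pure-Python sketch of the Getis-Ord Gi* z-score on a toy one-dimensional chain of areas. The case counts and contiguity weights are invented for illustration; the study itself used dedicated spatial analysis tooling.

```python
import math

def getis_ord_gi_star(x, w):
    """Getis-Ord Gi* z-scores for values x and spatial weights matrix w.
    w[i][j] > 0 when unit j neighbours unit i; w[i][i] is 1 because Gi*
    includes the unit itself in its neighbourhood."""
    n = len(x)
    xbar = sum(x) / n
    s = math.sqrt(sum(v * v for v in x) / n - xbar ** 2)
    z = []
    for i in range(n):
        wi = sum(w[i])                        # sum of weights for unit i
        s1 = sum(v * v for v in w[i])         # sum of squared weights
        num = sum(w[i][j] * x[j] for j in range(n)) - xbar * wi
        den = s * math.sqrt((n * s1 - wi ** 2) / (n - 1))
        z.append(num / den)
    return z

# toy 1-D chain of 5 areas with binary contiguity weights (self included)
x = [1.0, 2.0, 9.0, 8.0, 1.0]
w = [[1 if abs(i - j) <= 1 else 0 for j in range(5)] for i in range(5)]
scores = getis_ord_gi_star(x, w)
```

Areas 2 and 3, which form the high-value cluster, receive the largest positive z-scores, which is exactly the behavior a hotspot map visualizes.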

Keywords: animal rabies, spatio-temporal analysis, zoonosis, rhabdoviruses

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 118    [ Page 1 of 2, No. 43 ]

Influence of Physicochemical Water Parameters on the Total Weight of the Slipper-shaped Oyster Crassostrea iredalei in Visayas, Philippines

Authors: Michelle B. Besana, Ma. Ramela Angela C. Bermeo, and Philip Ian P. Padilla

Abstract:

Annual assessments of the total weights of the most abundant oyster species in the Philippines, Crassostrea iredalei, were examined for two consecutive years across ten different sampling sites in Visayas, Philippines. An ANCOVA model was used to investigate the effects of the different sampling sites, with the different physicochemical water parameters as covariates, on the log total weight. An ANOVA model was also used to examine site differences in log total weight without taking the covariates into account. The ANOVA and ANCOVA results were compared to distinguish site differences with and without the covariates in the model. The results from the ANOVA model revealed significant differences in mean log total weight between sites. In the final ANCOVA model, there were still significant differences in mean log total weight between sites above and beyond the significant positive covariate effect of temperature. The observed variations in the total weight of oysters are most likely due to the varied underlying internal and external factors that affect oyster culture in each ecological habitat. The study also reflects both the vulnerability and the coping mechanisms of the Philippine C. iredalei in response to variations in temperature, which are critical for developing the tolerance needed for positive growth and survival. The findings of this study could inform selective breeding and culture practices that take environmental factors into account, leading to a better understanding of the changing environmental conditions at the different culture sites and helping to ensure better culture management and harvest.

Keywords: analysis of variance (ANOVA), analysis of covariance (ANCOVA), Tukey-Kramer method, physicochemical water parameter, slipper-shaped oyster Crassostrea iredalei

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 117    [ Page 1 of 2, No. 44 ]

Comparison of Official Data Sources and Construction of a Sampling Frame for Household-based Livestock Surveys in Nueva Ecija, Philippines

Authors: Anna Ma. Lourdes S. Latonio, Isidoro P. David, and Zita V.J. Albacea

Abstract:

A basic issue in designing a sample selection procedure is how well the sampling frame corresponds to the target population from which the sample will come, making the construction of the sampling frame a vital aspect of the development of a sampling design. The main purpose of this paper is to evaluate and compare different official data sources so that they can be combined into a sampling frame appropriate for household-based livestock operations surveys covering the major livestock types (carabao, cattle, goat, and swine) in the Province of Nueva Ecija. The data values of the different official data sources available, such as the Philippine Carabao Center Inventory (PCCI), the Livestock and Poultry Survey (LPS), the Barangay Agricultural Profiling Survey (BAPS), and Local Government Unit (LGU) records for the province during 2007-2010, were compared. Scatter plots, strengths of relationships, and relative differences between observations on the same variables were analyzed. Specifically, relevant barangay-level data common to the different surveys were used to perform the comparisons. Based on the assessment of the different data sources, a barangay-level Household Based Livestock Frame (HBLF) was constructed. The constructed HBLF consists of 849 barangays as basic sampling units, each with attached livestock-related information and auxiliary information such as total rice area (TRA) and number of rice growers (NRG). The constructed HBLF can be used in the development of separate or combined sampling designs for household-based livestock surveys in the Province of Nueva Ecija.

Keywords: Agriculture, livestock, sampling, sampling frame, livestock inventory

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 116    [ Page 1 of 2, No. 45 ]

Sampling with Probability Proportional to Aggregate Size in Heterogeneous Populations: A Study of Design and Efficiency

Authors: Daniel David M. Pamplona

Abstract:

Sampling with probability proportional to aggregate size (PPAS) is compared with traditional design-unbiased sampling methods under different simulated population scenarios in the estimation of the population total. The study considered both the accuracy and the precision of the estimates in the comparison. Heterogeneous populations were simulated by exploring varying behaviors of an auxiliary variable and its relationship with the target variable. Results show that the optimality of estimates under PPAS sampling improves as the association between the target variable and the auxiliary variable strengthens. Furthermore, PPAS sampling estimates are more stable under large variability in the population.
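The core of the approach can be sketched with a Hansen-Hurwitz-type estimator of the total under with-replacement selection, where each unit's selection probability is proportional to its aggregate auxiliary size. The toy population below, with the target y roughly proportional to the size measure x, is invented to show why the estimator is accurate when the association is strong.

```python
import random

def ppas_total(y, x, m, seed=0):
    """Hansen-Hurwitz estimate of the population total of y from m
    with-replacement draws, with selection probability proportional to
    the aggregate auxiliary size x of each unit."""
    rng = random.Random(seed)
    xt = sum(x)
    p = [xi / xt for xi in x]                  # inclusion probabilities
    draws = rng.choices(range(len(y)), weights=p, k=m)
    return sum(y[i] / p[i] for i in draws) / m

# toy population where y is roughly proportional to x,
# so each inflated draw y_i / p_i is close to the true total
x = [10, 20, 30, 40, 50]
y = [22, 39, 61, 78, 103]
true_total = sum(y)        # 303
est = ppas_total(y, x, m=200)
```

Because every inflated observation y_i / p_i lies near the true total when the y-x association is strong, the estimator has a small variance, which is the mechanism behind the accuracy result in the abstract.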

Keywords: Probability Proportional to Aggregate Size Sampling, Nonparametric Bootstrap, Simple Random Sampling, Probability Proportional to Size Systematic Sampling

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 115    [ Page 1 of 2, No. 46 ]

Nonparametric Test of Interaction Effect for 2^2 Factorial Design with Unequal Replicates: Case of Poisson-Normal Multivariate Data

Authors: Mara Sherlin D. Talento, Marcus Jude P. San Pedro and Erniel B. Barrios

Abstract:

Multivariate analysis of variance (MANOVA) is fairly robust to the normality and constant-variance assumptions provided that the data are generated from a balanced design. Issues with hypothesis testing arise when the error distribution is non-normal or when the data are generated from an unbalanced design. We propose a nonparametric method of testing the interaction effect in a two-factor factorial design with a multivariate response and possibly highly unbalanced replicates. Simulation studies indicate that the test is correctly sized, with power increasing in both effect size and sample size. The parametric test based on MANOVA is incorrectly sized.
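The proposed test itself is not reproduced here, but the flavor of a nonparametric interaction test can be sketched with a simple permutation test on the 2x2 interaction contrast. Permuting all observations across cells assumes full exchangeability under the null, which is a simplification of the paper's procedure, and the unbalanced cell data below are invented.

```python
import random

def interaction_perm_test(cells, n_perm=2000, seed=0):
    """Permutation p-value for the 2x2 interaction contrast
    (mean11 - mean12) - (mean21 - mean22).  `cells` is a 2x2 nested list
    of observation lists with possibly unequal replicates.  Permuting the
    pooled observations assumes full exchangeability under the null."""
    rng = random.Random(seed)
    mean = lambda v: sum(v) / len(v)
    contrast = lambda c: (mean(c[0][0]) - mean(c[0][1])) - (mean(c[1][0]) - mean(c[1][1]))
    obs = abs(contrast(cells))
    pool = [v for row in cells for cell in row for v in cell]
    sizes = [len(cell) for row in cells for cell in row]
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)
        it = iter(pool)
        perm = [[[next(it) for _ in range(sizes[0])],
                 [next(it) for _ in range(sizes[1])]],
                [[next(it) for _ in range(sizes[2])],
                 [next(it) for _ in range(sizes[3])]]]
        if abs(contrast(perm)) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)       # add-one correction

# strong crossover interaction with unbalanced replicates -> small p-value
cells = [[[5, 6, 5, 6, 5], [1, 2, 1]],
         [[1, 1, 2], [5, 6, 5, 6]]]
p = interaction_perm_test(cells)
```

Because the reference distribution is built by resampling rather than from a parametric formula, the size of the test does not depend on normality or on balanced cell counts, which is the motivation behind the paper's proposal.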

Keywords: two-factor factorial design, unbalanced replicates, nonparametric, multivariate data, Poisson and Normal data

Download this article:

Year: 2019       Vol.: 68       No.: 2      


Record ID: 109    [ Page 1 of 2, No. 47 ]

Life in the Fast Food Lane: Understanding the Factors Affecting Fast Food Consumption among Students in the Philippines

Authors: Adina Faye Bondoc, Hannah Felise Florendo, Emilio Jefe Taguiwalo and John Eustaquio

Abstract:

The fast food industry in the Philippines is growing rapidly and now dominates the country's food service establishments. Together with this influx of fast food establishments has come an increase in fast food consumption, the emergence of an unhealthy lifestyle, and a rise in obesity prevalence, not only among Filipinos but around the world. The growth of the fast food industry has been aggressive, especially in its advertising, which is known to target families and the youth. Previous studies have shown that the youth tend to be more affected by fast-food-related obesity than adults. The researchers therefore built a model of whether students eat at fast food chains using the 2011 Global School-based Student Health Survey in the Philippines. Before modelling, factor analysis was performed to group variables together. A total of six factors arose, namely vices, assistance from others, injuries and bullying, hygiene, active lifestyle, and diet. In modelling with the original variables, various methods were used for variable selection to reduce the forty-seven variables to a manageable number of predictors: independent chi-squared tests, Fisher exact tests, forward and backward selection, and analysis of deviance. The resulting model showed that among the most significant predictors of whether or not a student eats fast food are the frequency of drinking soft drinks, of eating fruits, and of feeling hungry due to lack of food in the house. The weight and sex of a student also significantly affected the response: the odds of eating at a fast food chain were 33.84% lower for men than for women, and each kilogram increase in a student's weight increased the odds by 2.5%.
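The chi-squared screening step mentioned above can be illustrated with a short routine for the Pearson statistic on a contingency table; the counts below are hypothetical, not from the survey.

```python
def chi2_independence(table):
    """Pearson chi-squared statistic for an r x c contingency table,
    as used to screen categorical predictors one at a time."""
    row = [sum(r) for r in table]
    col = [sum(c) for c in zip(*table)]
    n = sum(row)
    stat = 0.0
    for i, r in enumerate(table):
        for j, o in enumerate(r):
            e = row[i] * col[j] / n       # expected count under independence
            stat += (o - e) ** 2 / e
    return stat

# hypothetical counts: fast-food eating vs soft-drink frequency
table = [[30, 70],    # rarely drinks soft drinks: 30 eat fast food, 70 do not
         [60, 40]]    # drinks daily:              60 eat fast food, 40 do not
stat = chi2_independence(table)
```

The statistic is then compared against the chi-squared critical value with (r-1)(c-1) = 1 degree of freedom (3.84 at the 5% level); predictors whose statistic falls below the cutoff are dropped before the logistic model is fit.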

Keywords: fast food, nutrition, factor analysis, logistic regression, Poisson loglinear model, negative binomial loglinear model, Global School-Based Student Health Survey

Download this article:

Year: 2019       Vol.: 68       No.: 1      


Record ID: 108    [ Page 1 of 2, No. 48 ]

Exploring the Disparities on the Actualization of the Ideal Number of Children among Filipino Women

Authors: Patrisha Brynne Agbayani, Kimberly Baltazar, Excel Franco and John Eustaquio

Abstract:

Family size has been consistently associated with poverty incidence, as shown by household survey data over time. According to the 2010 Census of Population, the average household size stands at 4.6 members. Given this situation, this research delves deeper into this vital familial attribute by determining the factors that influence the levels of disparity between a woman's actual and ideal number of children. Furthermore, the research aims to understand how the odds of a Filipina actualizing her ideal number of children increase or decrease in relation to the factors considered in the study.

The regression model for the multinomial response utilized in the study is the proportional-odds cumulative logistic regression model. The results of the study show that religious affiliation with the Roman Catholic church doubles the estimated odds of exceeding the ideal number of children among women. Meanwhile, the odds of exceeding the desired number of children decrease as financial status improves. Husband-related factors also affect the actualization of the ideal number of children. More importantly, the discrepancy between the woman's and the husband's ideal number of children, as well as the experience of emotional abuse from the spouse, both lead to an increase in the estimated odds of exceeding desired fertility. Lastly, a woman's use of contraceptives to delay or avoid pregnancy and whether the woman wanted her last pregnancy are also significant factors that affect the actualization of a woman's ideal number of children.
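The reported doubling of odds can be traced through the proportional-odds model directly: with a coefficient of log 2 on an indicator variable, every cumulative odds ratio doubles by construction. The cutpoints and categories below are hypothetical, chosen only to make the mechanism visible.

```python
import math

def cumulative_logit_probs(cutpoints, eta):
    """Category probabilities from a proportional-odds model:
    P(Y <= j) = logistic(alpha_j - eta), where the same linear predictor
    eta = x'beta shifts every cumulative logit by the same amount."""
    logistic = lambda z: 1 / (1 + math.exp(-z))
    cum = [logistic(a - eta) for a in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# three ordered categories: below / met / exceeded the ideal number of children
# hypothetical cutpoints; a coefficient of log(2) on an affiliation indicator
# doubles the odds of falling in a higher (exceeded) category
cutpoints = [-1.0, 1.0]
p_base = cumulative_logit_probs(cutpoints, eta=0.0)
p_affl = cumulative_logit_probs(cutpoints, eta=math.log(2.0))
```

Comparing the odds of the top category, p_affl gives exactly twice the odds of p_base, which is how a statement like "affiliation doubles the estimated odds of exceeding the ideal number of children" reads off the fitted coefficients.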

Keywords: fertility preference, contraceptive behavior, cumulative logit model, multiple response models

Download this article:

Year: 2019       Vol.: 68       No.: 1      


Record ID: 107    [ Page 1 of 2, No. 49 ]

Optimal Variable Subset Selection Problem in Regression Analysis is NP-Complete

Authors: Paolo Victor T. Redondo

Abstract:

Combinatorial and optimization problems are classified into different complexity classes according to, e.g., whether an algorithm that efficiently solves the problem exists or whether a hypothesized solution to the problem can be quickly verified. The optimal selection of a subset of variables in regression analysis was shown to belong to the complexity class NP-hard (Welch, 1982), in which solutions may not be easily (in terms of computing speed) proven optimal. Variable selection in regression analysis based on correlations is shown here to be NP-complete, i.e., it is NP-hard and also belongs to NP, the class of problems with easily verifiable solutions.

Keywords: optimal variable selection, regression analysis, np-completeness

Download this article:

Year: 2019       Vol.: 68       No.: 1      


Record ID: 106    [ Page 1 of 2, No. 50 ]

Computing the Combined Effect of Measurement Errors and Non-Response using Factor Chain-type Class of Estimator

Authors: Gajendra K. Vishwakarma, Neha Singh and Amod Kumar

Abstract:

In this article, we suggest an efficient factor chain-type class of estimators in the simultaneous presence of measurement error and nonresponse. It is shown that several estimators can be generated from the proposed class. The mean square error of the proposed class of estimators is derived and compared with those of other existing estimators, and the conditions under which the proposed estimator is more efficient are obtained. A theoretical and empirical study demonstrates the efficiency of this estimator over other existing estimators.

Keywords: Auxiliary variable, bias, mean square error, measurement errors, nonresponse, study variable

Download this article:

Year: 2019       Vol.: 68       No.: 1      


Record ID: 105    [ Page 1 of 2, No. 51 ]

Performance Evaluation and Comparison of Integer- Valued Time Series Models for Measles Outbreak Detection in Cavite

Authors: Vio Jianu C. Mojica and Frumencio F. Co

Abstract:

An ideal outbreak detection algorithm must be able to generate alarms early in an outbreak while providing optimal sensitivity and specificity, so as to mitigate mortality and other potential costs of investigating and responding to these events. One disease of particular interest is measles, a highly contagious disease that has exhibited periodic outbreaks in the Philippines. The performance of the NGINAR(1) and ZINGINAR(1) models for measles outbreak detection was examined using simulated datasets and an application to reported measles cases in Cavite province from 2010 to 2017. The models were evaluated based on their goodness-of-fit as well as the sensitivity, specificity, and timeliness of the detection thresholds they generated. Comparisons were made against ARIMA models and the popular Poisson INAR(1) model. Results show that INAR models have considerably higher probabilities of detection than ARIMA models, particularly for outbreaks of small magnitude. The Poisson INAR(1) generates the most alarms and thus has the highest sensitivity metrics. The NGINAR(1) and ZINGINAR(1) models, however, have lower false positive rates with outbreak detection capabilities comparable to the Poisson INAR(1). The NGINAR(1) model may be chosen as the best model considering its simplicity and its balance of sensitivity, specificity, and timeliness, which is optimal for a disease such as measles.
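The INAR(1) class underlying these models can be sketched by simulating its simplest member, the Poisson INAR(1) with binomial thinning; the parameter values are illustrative, and the NGINAR(1) and ZINGINAR(1) variants used in the paper differ in their innovation and marginal distributions.

```python
import math
import random

def simulate_poisson_inar1(alpha, lam, n, seed=0):
    """Simulate a Poisson INAR(1) count series
    X_t = alpha o X_{t-1} + eps_t, where 'o' is binomial thinning
    (each of the X_{t-1} cases survives with probability alpha) and
    eps_t ~ Poisson(lam) are the newly arriving cases."""
    rng = random.Random(seed)
    def poisson(mu):                       # Knuth's method, fine for small mu
        limit, k, p = math.exp(-mu), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    x = [poisson(lam / (1 - alpha))]       # start near the stationary mean
    for _ in range(1, n):
        survivors = sum(rng.random() < alpha for _ in range(x[-1]))
        x.append(survivors + poisson(lam))
    return x

# stationary mean of this process is lam / (1 - alpha) = 4
series = simulate_poisson_inar1(alpha=0.5, lam=2.0, n=500)
```

Outbreak detection then amounts to flagging days on which the observed count exceeds a high quantile of the one-step-ahead predictive distribution implied by the fitted model.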

Keywords: NGINAR(1), ZINGINAR(1), measles, outbreak detection, Cavite

Download this article:

Year: 2019       Vol.: 68       No.: 1      


Record ID: 114    [ Page 1 of 2, No. 52 ]

Investigating Dissimilarity in Spatial Area Data Using Bayesian Inference: The Case of Voter Participation in the Philippine National and Local Elections of 2016

Authors: Francisco N. de los Reyes

Abstract:

A commonly studied characteristic of area data is the assessment of similarity (or the absence thereof) among neighboring areal units. However, most methodologies do not quantify uncertainties, which are likely outcomes of sampling variation, and do not consider spatial autocorrelation. This paper explores the ability of Bayesian modeling to address these situations, applying the technique to voter participation statistics from the Philippine National and Local Elections of 2016.

Keywords: conditional autoregressive (CAR), proximity matrix, dissimilarity, voter turnout

Download this article:

Year: 2018       Vol.: 67       No.: 1      


Record ID: 113    [ Page 1 of 2, No. 53 ]

Measuring Market Risk with the Folded Peaks-Over-Thresholds Approach

Authors: Peter Julian Cayton

Abstract:

In this paper, we discuss the folding procedure for peaks-over-thresholds (POT) models and its applications in market risk measurement, namely value-at-risk (VaR) and expected shortfall (ES). Folding is a procedure in which data points that fall below a certain threshold value are moved above the threshold by a transformation formula. First, an initial fit of the generalized Pareto distribution (GPD) over a temporary threshold is obtained. Second, using the initially fitted GPD estimates and a newly selected threshold, a folding transformation moves the data points below the new threshold to higher values. Third, the data points above the new threshold are fitted to the GPD for inference and risk estimation. The risk measures from the folded GPD approach are compared with those of ARMA-GARCH financial econometric models and the unfolded POT approach in terms of their performance on real financial time series data such as stock indices and foreign currencies. The benefit of folding in the POT approach is lower estimated standard errors for the GPD parameters, provided an appropriate threshold has been selected. This indicates more accurate GPD parameter estimates, which lead to better VaR and ES estimates. The real data application shows that the VaR and ES from the folded POT methodology have fewer exceedances. Loss calculations indicate that while the folded POT might mean higher capital adequacy, the conservatively set VaR and ES would cushion against extreme losses incurred in exceedance events.
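For reference, the unfolded POT risk measures can be sketched as follows. A method-of-moments GPD fit is used here to keep the sketch self-contained; the paper works with likelihood-based fits and adds the folding step, which is not reproduced. The loss data below are invented.

```python
import math

def pot_var_es(losses, u, p):
    """Value-at-risk and expected shortfall at level p from a
    peaks-over-thresholds fit: exceedances over threshold u are fitted to
    a generalized Pareto distribution by the method of moments
    (mean = sigma/(1-xi), var = sigma^2 / ((1-xi)^2 (1-2xi)))."""
    exc = [x - u for x in losses if x > u]
    n, nu = len(losses), len(exc)
    m = sum(exc) / nu
    v = sum((e - m) ** 2 for e in exc) / (nu - 1)
    xi = 0.5 * (1 - m * m / v)              # MOM shape estimate
    sigma = m * (1 - xi)                    # MOM scale estimate
    var = u + sigma / xi * ((n / nu * (1 - p)) ** (-xi) - 1)
    es = (var + sigma - xi * u) / (1 - xi)  # valid for xi < 1
    return var, es

# invented losses: 95 benign observations plus 5 exceedances over u = 10
losses = [1.0] * 95 + [11.0, 12.0, 13.0, 14.0, 20.0]
var99, es99 = pot_var_es(losses, u=10.0, p=0.99)
```

The VaR expression is the standard GPD tail quantile formula, and ES exceeds VaR by construction, which is why ES is the more conservative capital buffer of the two.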

Keywords:

Download this article:

Year: 2018       Vol.: 67       No.: 1      


Record ID: 112    [ Page 1 of 2, No. 54 ]

Employment Correlates of Multidimensional Poverty in the Philippines

Authors: Manuel Leonard Albis and Jessmond Elviña

Abstract:

The multidimensional poverty index (MPI) captures more welfare characteristics than income- or expenditure-based poverty measures. It is an emerging social statistic that must be understood to guide poverty alleviation policies. This paper identifies robust employment correlates of the MPI using Bayesian averaging of classical estimates (BACE). The results indicate that being employed decreases the MPI, but the length and nature of employment add to it. Community public goods, as well as remittances, decrease the MPI, among the other control variables considered. If the aim is to reduce the MPI, priority in uplifting policy measures should be given to laborers who work for different employers rather than to contractual workers.

Keywords: MPI, underemployment, BACE

Download this article:

Year: 2018       Vol.: 67       No.: 1      


Record ID: 111    [ Page 1 of 2, No. 55 ]

Coping with Disasters Due to Natural Hazards: Evidence from the Philippines

Authors: Majah-Leah Ravago, Dennis Mapa, Jun Carlo Sunglao and James Roumasset

Abstract:

We explored how local governments respond to disasters due to natural hazards to determine the mix of risk management and coping strategies (ex ante and ex post) they employ to improve welfare. We focused on disasters caused by hydro-meteorological hazards that occur with high frequency and high probability. Using data from a novel survey we conducted on disaster risk management practices of local government units (LGUs) in the Philippines, we developed indices of the various risk management and coping strategies of LGUs to explain what aids in their recovery from disasters. The most prominent strategies are risk-coping activities, especially cleanup operations and receiving relief from others. Among ex ante activities, employing long-term precautionary measures improves recovery. These include building resilient housing units; investing in stronger public facilities; building dams, dikes, and embankments; upgrading power and water lines; maintaining roads; identifying relocation areas; and rezoning and land-use regulations. In contrast, interruption of lifeline services such as water and electricity contributes adversely to recovery. Evidence also shows that LGUs’ profile characteristics matter. An LGU with higher local revenues has higher chances of recovery. On the other hand, being located in a province where dynasty share is high contributes negatively to an LGU’s recovery. The combination of these ex ante and ex post risk management strategies informs policies on where to put priority and investments in disaster risk management.

Keywords: Disaster, shock, coping, risk management, local government

Download this article:

Year: 2018       Vol.: 67       No.: 1      


Record ID: 110    [ Page 1 of 2, No. 56 ]

The Multidimensional Approach to Measuring Poverty

Authors: Lisa Grace S. Bersales, Divina Gracia L. del Prado and Mae Abigail O. Miralles

Abstract:

The current official measurement of poverty published by the Philippine Statistics Authority is based on income and does not capture the multidimensional deprivations suffered by Filipinos. This paper discusses a multidimensional poverty index (MPI) for the Philippines using four (4) dimensions with thirteen (13) indicators: education; health and nutrition; housing, water, and sanitation; and employment. The Alkire-Foster (AF) method of computing multidimensional poverty measures is adopted, with nested uniform weights as the weighting scheme and 1/3 as the poverty cutoff. Other weighting schemes (nested inverse incidence and subjective welfare) and other poverty cutoffs (1/4 and 1/5) are also explored. Results reveal that the choice of weighting scheme and poverty cutoff does not greatly affect the trend of the multidimensional poverty measures or the ranking of the dimensions in terms of their contribution to multidimensional poverty.
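The Alkire-Foster computation itself is short enough to sketch in full: build a weighted deprivation score per person, apply the poverty cutoff k, and multiply incidence by intensity. The four single-indicator dimensions below are a toy simplification of the paper's 13-indicator structure.

```python
def alkire_foster_mpi(deprivations, weights, k=1/3):
    """Alkire-Foster MPI: each row of `deprivations` is one person's 0/1
    deprivation status on each indicator, with `weights` summing to 1.
    A person is multidimensionally poor when the weighted deprivation
    score is >= k; MPI = H (headcount ratio) x A (average intensity
    among the poor)."""
    scores = [sum(w * d for w, d in zip(weights, row)) for row in deprivations]
    poor = [s for s in scores if s >= k]
    if not poor:
        return 0.0, 0.0, 0.0
    h = len(poor) / len(scores)            # incidence H
    a = sum(poor) / len(poor)              # intensity A
    return h * a, h, a

# toy case: four one-indicator dimensions under nested uniform weights
weights = [0.25, 0.25, 0.25, 0.25]
deprivations = [[1, 1, 0, 0],   # score 0.50 -> poor
                [1, 0, 0, 0],   # score 0.25 -> not poor (below k = 1/3)
                [1, 1, 1, 0],   # score 0.75 -> poor
                [0, 0, 0, 0]]   # score 0.00 -> not poor
mpi, h, a = alkire_foster_mpi(deprivations, weights)
```

Changing `weights` or `k` reruns the whole exercise, which is exactly the sensitivity analysis over weighting schemes and cutoffs that the paper reports.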

Keywords: multidimensional poverty, MPI, poverty, headcount ratio, intensity

Download this article:

Year: 2018       Vol.: 67       No.: 1      


Record ID: 104    [ Page 1 of 2, No. 57 ]

Economic Mobility in Urban Southeast Asia: The Case of the Philippines and Indonesia

Authors: Novee Lor Leyso, Arturo Martinez Jr., and Iva Sebastian

Abstract:

Recognizing that urban areas play a key role in addressing poverty and inequality, in line with Sustainable Development Goals (SDGs) 1 and 10 respectively, it is necessary to understand the dynamics of the economic well-being of people living in urban areas in order to formulate appropriate and effective strategies. Using economic mobility as a metric of well-being, this study examines whether the population size of urban areas affects people's mobility prospects. We investigate this issue using longitudinal expenditure data from Indonesia and the Philippines. Our results show that city size has mixed effects on directional mobility in the two countries: it has a negative and significant impact on the probability that Indonesians experience upward mobility, while its effect on the probability that Filipinos experience upward mobility is positive. In both countries, people living in megacities and micro urban areas experience more non-directional mobility with respect to several economic mobility measures.

Keywords: Economic mobility, Urbanization, Urban Poverty, Inequality, City Size, Panel Data, and Multinomial Logistic Regression

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 103    [ Page 1 of 2, No. 58 ]

Understanding the Ideal Number of Children and Contraceptive Practices of Filipino Women through Generalized Linear Models

Authors: Isabella Benabaye, Patricia Rose Donato and John D. Eustaquio

Abstract:

In making and assessing family planning policies and programs, it is vital to investigate fertility preference, as it not only reveals a woman's ideal number of children and the couple's consensus on it but also captures information on unwanted and mistimed pregnancies. The theoretical relationships of a woman's ideal number of children with micro-level factors, such as her experience with child mortality, her level of household authority, and household family planning awareness, were examined in two cases: first, among women who have achieved their fertility preference, and second, among women who have not. This study also examined the factors affecting the contraceptive behavior of women who have not achieved their fertility preference, specifically for (a) contraceptive users, (b) non-users who intend to use contraceptives later, and (c) non-users with no intention to use them. The difference in the factors influencing the ideal number of children between women who have and have not met their fertility preference showed that, rather than factors related to family planning, the ideal number of children for women with unmet fertility preference is decreased by factors that suggest a lack of women's empowerment. The analysis of contraceptive behavior, on the other hand, identified factors that can hinder the realization of women's intention to practice contraception.

Keywords: fertility preference, contraceptive behavior, poisson count model, binary regression

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 102    [ Page 1 of 2, No. 59 ]

Measles Outbreak Detection in Metro Manila: Comparisons between ARIMA and INAR Models

Authors: Joshua Mari J. Paman, Frank Niccolo M. Santiago, Vio Jianu C. Mojica, Frumencio F. Co, and Robert Neil F. Leong

Abstract:

It is the goal of many developing countries to stop the spread of diseases. Part of this effort is ongoing surveillance of disease transmission to foresee future epidemics. However, the Philippines lacks an automated method for detecting outbreaks. This paper compares an integer-valued autoregressive (INAR) model with the more commonly known autoregressive integrated moving average (ARIMA) models in detecting the presence of disease outbreaks. Daily measles reports spanning January 1, 2010 to January 14, 2015 were obtained from the Department of Health and used to motivate this study. Synthetic datasets were generated using a modified Serfling model. Similarity tests using a dynamic time warping algorithm were conducted to ensure that the simulated datasets behave similarly to the original set. False positive rates, sensitivity rates, and delays in detection were then evaluated for the two models. The results show that an INAR model performs favorably compared to an ARIMA model, posting higher sensitivity rates, similar lag times, and equivalent false positive rates for three-day signal events.
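The dynamic time warping similarity check mentioned above rests on a classic dynamic-programming recursion, sketched below on invented epidemic-curve counts; the actual similarity thresholds used in the paper are not reproduced.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two numeric
    series: each cell holds the cheapest cumulative alignment cost of
    the prefixes, with moves down, right, or diagonally."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# a time-shifted copy of an epidemic curve stays close under DTW,
# even though pointwise (Euclidean) comparison would penalize the shift
base  = [0, 1, 3, 7, 3, 1, 0, 0]
shift = [0, 0, 1, 3, 7, 3, 1, 0]
d = dtw_distance(base, shift)
```

This tolerance to time shifts is why DTW, rather than a pointwise distance, is the natural check that a simulated outbreak curve "behaves like" the observed one.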

Keywords: measles, biosurveillance, integer-valued autoregressive model, Serfling model, dynamic time warping

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 101    [ Page 1 of 2, No. 60 ]

Modeling Rare Events using a Zero-Inflated Poisson (ZIP) Distribution: Some New Results on Point Estimation

Authors: Suntaree Unhapipat, Nabendu Pal and Montip Tiensuwan

Abstract:

This paper takes a fresh look at point estimation of the model parameters of a zero-inflated Poisson (ZIP) distribution, since some finer details of point estimation, if overlooked, can lead to wrong estimates, as has happened in earlier research. In this paper we achieve the following new results: (a) a new set of corrected method-of-moments estimators is proposed; (b) we show how the standard technique of differentiating the log-likelihood function to find the maximum likelihood estimators may lead to wrong estimates, and how to avoid this problem; and (c) a new adjusted maximum likelihood estimation technique is proposed which not only always produces meaningful estimates but also appears to outperform all other estimation techniques in terms of standardized mean squared error (SMSE) when the ZIP is used to model rare events. Finally, datasets on rare events are used to demonstrate the estimation techniques and to show how the ZIP distribution can be used to model such data.
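The method-of-moments idea in (a) can be illustrated for the ZIP model: from E[X] = (1 - pi) * lambda and E[X^2]/E[X] = 1 + lambda, the moment estimators follow directly, with a parameter-space check of the kind the paper argues must not be overlooked. The paper's specific corrections are not reproduced, and the count data below are invented.

```python
def zip_mom(xs):
    """Method-of-moments estimates for a zero-inflated Poisson.
    Using E[X] = (1 - pi) * lambda and E[X^2]/E[X] = 1 + lambda gives
    lambda = m2/m1 - 1 and pi = 1 - m1/lambda; estimates outside the
    parameter space are flagged instead of being silently returned."""
    n = len(xs)
    m1 = sum(xs) / n                       # first raw moment
    m2 = sum(x * x for x in xs) / n        # second raw moment
    lam = m2 / m1 - 1
    pi = 1 - m1 / lam
    if lam <= 0 or not 0 <= pi < 1:
        raise ValueError("moment estimates fall outside the ZIP parameter space")
    return pi, lam

# invented counts: many structural zeros on top of a Poisson component
xs = [0] * 60 + [0, 1, 1, 2, 2, 2, 3, 3, 4, 5] * 4
pi_hat, lam_hat = zip_mom(xs)
```

The explicit range check is the point: naive moment or likelihood algebra can hand back a "pi" outside [0, 1) on awkward samples, which is the failure mode the corrected estimators are designed to avoid.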

Keywords: Maximum likelihood estimation, method of moments estimation, standardized mean squared error, standardized bias, goodness of fit test.

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 100    [ Page 1 of 2, No. 61 ]

Zero-Truncated New Quasi Poisson-Lindley Distribution and its Applications

Authors: Rama Shanker and Kamlesh Kumar Shukla

Abstract:

A zero-truncated new quasi Poisson-Lindley distribution (ZTNQPLD), which includes the zero-truncated Poisson-Lindley distribution (ZTPLD) as a particular case, has been studied. Its probability mass function has been obtained by compounding the size-biased Poisson distribution (SBPD) with an assumed continuous distribution. The rth factorial moment of the ZTNQPLD has been derived, and hence its raw and central moments have been presented. Expressions for the coefficient of variation, skewness, kurtosis, and index of dispersion have been given, and their nature and behavior have been studied graphically. The method of maximum likelihood estimation has been discussed for estimating the parameters of the ZTNQPLD. Finally, the goodness of fit of the ZTNQPLD has been assessed on some datasets and found to be better than that of the zero-truncated Poisson distribution (ZTPD) and the zero-truncated Poisson-Lindley distribution (ZTPLD).

Keywords: Zero-truncated distribution, New quasi Poisson-Lindley distribution, compounding, moments, Maximum Likelihood estimation, Goodness of fit.

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 99    [ Page 1 of 2, No. 62 ]

Regression and Variable Selection via A Layered Elastic Net

Authors: Michael Van B. Supranes and Joseph Ryan G. Lansangan

Abstract:

One approach in modeling high dimensional data is to apply an elastic net (EN) regularization framework. EN has the good properties of the least absolute shrinkage and selection operator (LASSO); however, EN tends to keep variables that are strongly correlated with the response and may result in an undesirable grouping effect. The Layered Elastic Net Selection (LENS) is proposed as an alternative framework for utilizing EN such that interrelatedness and groupings of predictors are explicitly considered in the optimization and/or variable selection. Assuming groups are available, LENS applies the EN framework group-wise in a sequential manner. Based on the simulation study, LENS may result in an ideal selection behavior and may exhibit a more appropriate grouping effect than the usual EN. LENS by itself yields poor prediction accuracy, but applying OLS on the selected variables may yield optimal results. At optimal conditions, the mean squared prediction error of OLS on LENS-selected variables is on par with that of OLS on EN-selected variables. Overall, applying OLS on LENS-selected variables strikes a better compromise between prediction accuracy and ideal grouping effect.

Keywords: regression, variable selection, variable clustering, high dimensional data, elastic net, grouping effect

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 98    [ Page 1 of 2, No. 63 ]

Asymptotic Decorrelation of Discrete Wavelet Packet Transform of Generalized Long-memory Stochastic Volatility

Authors: Alex C. Gonzaga

Abstract:

We derive the asymptotic properties of the discrete wavelet packet transform (DWPT) of the generalized long-memory stochastic volatility (GLMSV) model, a relatively general stochastic volatility model that accounts for persistent (or long-memory) and seasonal (or cyclic) behavior at several frequencies. We derive the rates at which the covariances of between-scale and within-scale wavelet packet coefficients at different subbands converge to zero. Wavelet packet coefficients in the same subband can be shown to be approximately uncorrelated by an appropriate choice of basis vectors using a white noise test. These results may be used to simplify the variance-covariance matrix into a diagonal matrix whose diagonal elements have few distinct variances to compute.

Keywords: discrete wavelet packet transform, generalized longmemory stochastic volatility, asymptotic decorrelation

Download this article:

Year: 2017       Vol.: 66       No.: 2      


Record ID: 96    [ Page 1 of 2, No. 64 ]

Survival Analysis for Weaning Time of the Palestinian Children

Authors: Ali H. Abuzaid and Raida F. Zaqout

Abstract:

This study addresses the factors that affect the weaning time of Palestinian children based on the 2006 Palestinian Family Survey data. The Weibull parametric model was found to be the most appropriate fit for the data. The study showed that factors such as the child's weight at birth, the child's age, the mother's age at delivery, and the mother's educational status have significant effects on weaning time. The findings also revealed that factors such as the mother's refugee status, locality type, total live births, and the mother's smoking status have no significant effect on the duration of breastfeeding at the 0.05 level of significance.

Keywords: breastfeeding, censored, Cox proportional model, Wald statistic

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 95    [ Page 1 of 2, No. 65 ]

Modeling Iloilo River Water Quality

Authors: Michelle B. Besana and Philip Ian P. Padilla

Abstract:

The analysis of covariance (ANCOVA) model with a heterogeneous-variance first-order autoregressive error covariance structure (ARH1) was used to model the differences in fecal streptococci concentration in Iloilo River over time, with fixed site and seasonal effects as primary factors of interest, and water temperature, pH, dissolved oxygen, and salinity as covariates. The restricted maximum likelihood (REML) procedure was used to derive the parameter estimates, and the Kenward-Roger adjustment in the degrees of freedom was used to better approximate the distributions of the test statistics. The effect of season was highly significant (p = 0.0019). The site effect was significant at the 0.0539 level. The effects of water surface temperature and pH were significant at the 0.0655 and 0.0828 levels, respectively. The effects of dissolved oxygen and salinity were not significant. Although the coefficient of determination was modest, the results of the study are useful in characterizing the dynamics of the Iloilo River bacteriological system, which contributes to an improved understanding of Iloilo River water quality.

Keywords: analysis of covariance (ANCOVA), heterogeneous variance first-order autoregressive error covariance structure (ARH1), restricted maximum likelihood estimation (REML), fecal indicator bacteria (FIB), fecal streptococcus

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 94    [ Page 1 of 2, No. 66 ]

An Index of Financial Inclusion in the Philippines: Construction and Analysis

Authors: Mynard Bryan R. Mojica and Claire Dennis S. Mapa

Abstract:

Financial inclusion has become a policy priority in many developing countries, including the Philippines. However, the issue of its robust measurement is still outstanding. The challenge comes from the fact that financial inclusion is a multidimensional phenomenon; a comprehensive measure is therefore needed to adequately gauge the inclusiveness of a financial system. This paper constructed a Financial Inclusion Index (FII) to measure access to and usage of financial services in the Philippines using provincial data. Results show that while there are marked geographical disparities based on the FII, there is significant positive spatial autocorrelation, indicating that nearby provinces exhibit similar levels of financial inclusion. The paper also examined the relationship between the FII and variables often linked to financial inclusion, such as income, poverty, literacy, and employment, as well as the province's level of human development and competitiveness. On the methodological side, possible improvements and technical innovations in constructing the FII are laid out to maximize its potential as an analytical tool for surveillance and policy-making.

Keywords: inclusive finance, composite indicator, financial inclusion index

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 93    [ Page 1 of 2, No. 67 ]

Comparison of Regression Estimator and Ratio Estimator: A Simulation Study

Authors: Dixi M. Paglinawan

Abstract:

We compared ratio and regression estimators empirically based on bias and coefficient of variation. Simulation studies accounting for sampling rate, population size, heterogeneity of the auxiliary variable x, deviation from linearity, and model misspecification were conducted. The study shows that the ratio estimator is better than the regression estimator when the regression line passes close to the origin. Ratio and regression estimators still work even if there is only a weak linear relationship between x and y, provided that model misspecification is minimal, if not absent. When the relationship between the target variable and the auxiliary variable is very weak, bootstrap estimates yield lower bias. The regression estimator is generally more efficient than the ratio estimator.
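
The comparison can be illustrated with a small sampling simulation. The population model (a regression line through the origin), population size, sample size, and replicate count below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Repeated-sampling comparison of the ratio and regression estimators of a
# finite-population mean, using an auxiliary variable x with known mean.

rng = np.random.default_rng(1)
N, n, reps = 10_000, 200, 2000
x = rng.uniform(10, 50, N)
y = 2.0 * x + rng.normal(0, 5, N)        # regression line through the origin
Xbar, Ybar = x.mean(), y.mean()          # population means

ratio_est, reg_est = [], []
for _ in range(reps):
    idx = rng.choice(N, n, replace=False)
    xs, ys = x[idx], y[idx]
    ratio_est.append(ys.mean() / xs.mean() * Xbar)       # ratio estimator
    b = np.cov(xs, ys)[0, 1] / xs.var(ddof=1)            # sample slope
    reg_est.append(ys.mean() + b * (Xbar - xs.mean()))   # regression estimator

ratio_est, reg_est = np.array(ratio_est), np.array(reg_est)
# With a line through the origin, both estimators are nearly unbiased.
```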

Keywords: auxiliary variable, ratio estimator, regression estimator, bootstrap estimator

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 92    [ Page 1 of 2, No. 68 ]

A Class of Ratio-Cum-Product Type Exponential Estimators under Simple Random Sampling

Authors: Gajendra K. Vishwakarma and Sayed Mohammed Zeeshan

Abstract:

In this paper, a class of ratio-cum-product type exponential estimators is proposed under simple random sampling to estimate the population mean. The proposed class of estimators is compared with existing standard estimators, and an empirical study shows that the previous estimators are inferior to the proposed class. The populations used in the empirical study differ considerably from one another, which demonstrates the superiority of the proposed estimators across a range of situations.

Keywords: auxiliary variable, study variable, simple random sampling, ratio type estimators, product type estimators, bias, MSE

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 91    [ Page 1 of 2, No. 69 ]

An Exponentially Weighted Moving Average Control Chart for Zero-Truncated Poisson Processes: A Design and Analytic Framework with Fast Initial Response Feature

Authors: Robert Neil F. Leong, Frumencio F. Co, Vio Jianu C. Mojica and Daniel Stanley Y. Tan

Abstract:

Inspired by the capability of exponentially-weighted moving average (EWMA) charts to balance sensitivity and false alarm rates, we propose one for zero-truncated Poisson processes. We present a systematic design and analytic framework for implementation. Further, we add a fast initial response (FIR) feature, which ideally increases sensitivity without compromising false alarm rates. The proposed charts (basic and with FIR feature) were evaluated based on both the in-control average run length (ARL0), to measure false alarm rate, and the out-of-control average run length (ARL1), to measure sensitivity in detecting unwanted shifts. The evaluation used an intensive Markov chain simulation study at different settings for different weighting parameters (ω). Empirical results suggest that, for both scenarios, the basic chart had: (1) ARLs that increase exponentially as a function of the chart threshold L; and (2) longer ARLs for smaller ω. Moreover, the added FIR feature improved ARL1 by 5% - 55%, resulting in quicker shift detection at a relatively minimal loss in ARL0. These results were also compared to Shewhart and CUSUM control charts at similar settings, and it was observed that the EWMA charts generally performed better by striking a balance between higher ARL0 and lower ARL1. These advantages of the EWMA charts were more pronounced for larger shifts in the parameter λ. Finally, a case application in monitoring out-of-control events in hospital surgical data is presented to demonstrate usability in a real-world setting.
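
For reference, a plain-Poisson sketch of the standard EWMA recursion and time-varying control limits is given below. The paper's chart is for zero-truncated Poisson processes and includes a FIR head-start, neither of which this generic illustration reproduces; the values of mu0, omega, and L are assumptions.

```python
import numpy as np

# Standard EWMA chart: z_t = omega * x_t + (1 - omega) * z_{t-1},
# with exact time-varying limits mu0 +/- L * sigma0 *
# sqrt(omega / (2 - omega) * (1 - (1 - omega)^(2t))).

def ewma_chart(x, mu0, sigma0, omega=0.2, L=3.0):
    z = np.empty(len(x))
    prev = mu0                      # standard start; FIR would offset this
    for t, xt in enumerate(x):
        prev = omega * xt + (1 - omega) * prev
        z[t] = prev
    steps = np.arange(1, len(x) + 1)
    width = L * sigma0 * np.sqrt(omega / (2 - omega)
                                 * (1 - (1 - omega) ** (2 * steps)))
    return z, mu0 - width, mu0 + width

rng = np.random.default_rng(7)
lam = 4.0
# In-control counts for 100 periods, then an upward shift in the mean.
x = np.concatenate([rng.poisson(lam, 100), rng.poisson(lam + 3, 30)])
z, lcl, ucl = ewma_chart(x, mu0=lam, sigma0=np.sqrt(lam))
signals = np.where((z > ucl) | (z < lcl))[0]   # out-of-control periods
```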

Keywords: exponentially-weighted moving average control chart, zero-truncated Poisson process, fast initial response feature, average run length, infection control

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 90    [ Page 1 of 2, No. 70 ]

Spatial-Temporal Models and Computational Statistics Methods: A Survey

Authors: Erniel B. Barrios and Kevin Carl P. Santos

Abstract:

We introduce panel models and identify their link to spatial-temporal models. Both models are characterized and differentiated through the variance-covariance matrix of the disturbance term. The resulting estimates or tests are as complicated as the nature of this variance-covariance matrix. Some iterative methods typically used in computational statistics are also presented. These methods are used in conducting statistical inference for spatial-temporal models.

Keywords: panel data, spatial-temporal model, forward search algorithm, additive models, backfitting algorithm, isotonic regression

Download this article:

Year: 2017       Vol.: 66       No.: 1      


Record ID: 89    [ Page 1 of 2, No. 71 ]

A Sustainability Model for Small Health Maintenance Programs

Authors: Mia Pang Rey and Ivy D.C. Suan

Abstract:

The objective of this paper is to present a theoretical model that can assist community-based health maintenance providers in handling their actuarial risk. It determines the factors and conditions under which the said model can be made financially sustainable. The break-even formulas for some of the parameters are derived. It likewise examines the amount of reserves needed to manage underwriting risk.

Keywords: health maintenance programs, sustainability

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 88    [ Page 1 of 2, No. 72 ]

Multiple Statistical Tools for Divergence Analysis of Rice (Oryza sativa L.) Released Varieties

Authors: Aldrin Y. Cantila, Sailila E. Abdula, Haziel Jane C. Candalia and Gina D. Balleras

Abstract:

Rice released varieties are genetic resources carrying good genes. To define the potential of these germplasm, genetic divergence analysis must be done. The study used different statistical tools, such as descriptive statistics, the Kolmogorov-Smirnov test, the Shannon-Weaver diversity index (H’), correlation statistics (r), principal component analysis (PCA), Dixon’s test, and clustering statistics, in evaluating 29 NSIC (National Seed Industry Council) released varieties based on 11 morphological traits. Descriptive statistics showed significant differences among the traits used, which followed a normal distribution. The Shannon-Weaver index ranged from 0.55 (number of filled grains per panicle, NFGP) to 0.91 (grain yield, GY, and number of tillers, NT), indicating moderately to highly diverse traits. Correlations among traits ranged from r = -0.55 to 0.84, with GY positively correlated with all traits. PCA accounted for 39.95% and 26.10% of the variation in PC1 and PC2, respectively; panicle weight (PW) was the largest positive contributor to the two PCs, which together explained 66.05% of the variation. PCA also detected two latent traits, GY and spikelet fertility (SF), as confirmed by Dixon’s test, where an outlier was found in SF and in yield-contributing traits. Clustering statistics separated the varieties into 5 clusters with Euclidean distances (ED) ranging from 5.88 to 106.22. Among the clusters, the 5th cluster, composed of one variety, NSIC Rc240, gave the highest GY (7.07 t ha-1), NFGP (152.67), one-thousand-grain weight (24.77 g), PW (5.08 g), and spikelet number per panicle (185.33). The variety could potentially be adapted to, and serve as a good source of genes for, rice improvement in General Santos City.

Keywords: clustering statistics, correlation statistics, descriptive statistics, Shannon-Weaver index, rice released varieties, principal component analysis

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 87    [ Page 1 of 2, No. 73 ]

Linear Discriminant Analysis vs. Genetic Algorithm Neural Network with Principal Component Analysis for Hyperdimensional Data Analysis: A study on Ripeness Grading of Oil Palm (Elaeis guineensis Jacq.) Fresh Fruit

Authors: Divo Dharma Silalahi, Consorcia E. Reaño, Felino P. Lansigan, Rolando G. Panopio and Nathaniel C. Bantayan

Abstract:

Using Near Infrared Spectroscopy (NIRS) spectral data, the performance of Linear Discriminant Analysis (LDA) was compared with that of a Genetic Algorithm Neural Network (GANN) in solving the classification problem of ripeness grading of oil palm fresh fruit. LDA is a well-known classical statistical technique for classification and dimensionality reduction. GANN is a modern computational statistical method in soft computing, with some adaptive behavior in the system. The first four component variables resulting from Principal Component Analysis (PCA) were also used as input variables to increase efficiency and speed up the data analysis. Based on the results, in both the training and validation phases the GANN technique had lower Mean Absolute Error (MAE) and Root Mean Square Error (RMSE), a higher percentage of correct classification, and was better suited to handling large amounts of data than the LDA technique. Therefore, GANN is superior to LDA in terms of precision and error rate in handling a hyperdimensional data analysis problem for ripeness classification of oil palm fresh fruit.

Keywords: Near Infrared Spectroscopy, Neural Network, Genetic Algorithm, Linear Discriminant Analysis, Principal Component Analysis, Oil Palm, Ripeness

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 86    [ Page 1 of 2, No. 74 ]

Quantile and Restricted Maximum Likelihood Approach for Robust Regression of Clustered Data

Authors: May Ann S. Estoy and Joseph Ryan G. Lansangan

Abstract:

Quantile regression and restricted maximum likelihood are incorporated into a backfitting approach to estimate a linear mixed model for clustered data. Simulation studies covering a wide variety of scenarios relating to clustering, presence of outliers, and model specification error are conducted to assess the performance of the proposed methods. The methods yield biased estimates but higher predictive ability compared to ordinary least squares and ordinary quantile regression.

Keywords: linear mixed models; quantile regression; restricted maximum likelihood; backfitting; bootstrap; clustered data

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 85    [ Page 1 of 2, No. 75 ]

Nonparametric Hypothesis Testing for Isotonic Survival Models with Clustering

Authors: John D. Eustaquio

Abstract:

A nonparametric test for clustering in survival data based on the bootstrap method is proposed. The survival model used considers the isotonic property of the covariates in the estimation via the backfitting algorithm. Assuming a model that incorporates the clustering effect into the piecewise proportional hazards model, simulation studies indicate that the procedure is correctly sized and powerful in a reasonably wide range of scenarios. The test procedure for the presence of clustering over time is also robust to model misspecification.

Keywords: Bootstrap confidence interval; Survival Analysis; Clustered Data; Backfitting Algorithm; Generalized Additive Models; Nonparametric bootstrap.

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 84    [ Page 1 of 2, No. 76 ]

Semiparametric Probit Model for High-dimensional Clustered Data

Authors: Daniel R. Raguindin and Joseph Ryan G. Lansangan

Abstract:

A semiparametric probit model for high dimensional clustered data and its estimation procedure are proposed. The model is characterized by flexibility in the model structure through a nonparametric formulation of the effect of the predictors on the dichotomous response and a parametric specification of the inherent heterogeneity due to clustering. The predictive ability of the model is further investigated by looking at possible factors such as dimensionality, presence of misspecification, clustering, and response distribution. Simulation studies illustrate the advantages of using the proposed model over the ordinary probit model even in low dimensional cases. High predictive ability is observed in high dimensional cases especially when the distribution of the response categories is balanced. Results show that cluster distribution and functional form of the response variable do not affect the performance of the model. Also, the predictive ability of the proposed estimation increases as the number of clusters increases. Under the presence of misspecification, the predictive ability of the model is slightly lower yet remains better than the ordinary probit model.

Keywords: probit model, high dimensional data, backfitting algorithm, local scoring algorithm

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 97    [ Page 1 of 2, No. 77 ]

SPCR-Based Control Chart for Autocorrelated Processes with High Dimensional Exogenous Variables

Authors: Paul Eric G. Abeto and Joseph Ryan G. Lansangan

Abstract:

Monitoring processes in an industry is one means to ensure the quality of goods produced or services provided. Control charts are constructed by estimating control limits within which the process can be regarded as stable. The estimation is made by analyzing the behavior of the monitored process. However, the assumptions of uncorrelatedness and normality of the measurements, common in most control charts, are sometimes uncharacteristic of the monitored process. Also, data from other variables may be available and may provide meaningful information on the behavior of the monitored process, and thus may be valuable in the estimation of the control limits. In this paper, a methodology that uses sparse principal component regression on high dimensional exogenous variables to estimate control limits of autocorrelated processes is proposed. Simulations are conducted to study different scenarios that may affect the proposed estimation. The false alarm rate, average run length during stable periods, and first detection rate upon structural change are used as key indicators for characterization and comparison. Simulation results suggest that modelling a process with high dimensional exogenous variables through sparse principal components yields better estimates of the corresponding control chart parameters. False alarm rates and average run lengths were comparable with those of the Exponentially Weighted Moving Average (EWMA) control chart. Also, faster identification of structural change was observed, potentially because the process is modelled in terms of additional information carried by the exogenous variables.

Keywords: Control chart, autocorrelated process, high dimensional data, sparse principal component regression

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 83    [ Page 1 of 2, No. 78 ]

Small Area Estimation with Spatiotemporal Mixed Model

Authors: Divina Gracia L. Del Prado and Erniel B. Barrios

Abstract:

A spatiotemporal model with nested random effects is proposed for small area estimation where sample data are generated from a rotating panel survey. Two methods of estimation are introduced, integrating the backfitting algorithm and the bootstrap procedure in two different approaches. A simulation study shows superior predictive ability of the fitted model. The small area estimation methods also produce efficient estimates of parameters in a wide class of population scenarios. The model-based small area estimation procedure also outperforms the design-based approach in estimating the unemployment rate from the Philippine Labor Force Survey.

Keywords: spatiotemporal mixed model; small area estimation; backfitting algorithm; bootstrap.

Download this article:

Year: 2016       Vol.: 65       No.: 2      


Record ID: 82    [ Page 1 of 2, No. 79 ]

Interdependence of Philippine Stock Exchange Sector Indices: Evidence of Long-run and Short-run Relationship

Authors: Karl Anton M. Retumban

Abstract:

The interdependence of the Philippine Stock Exchange Sector Indices was analyzed using Johansen’s cointegration test, Granger causality, and forecast error variance decomposition. Daily, weekly, and monthly data from January 2006 to June 2015 were used. The results confirm the existence of cointegration among the six sector indices, implying that the indices follow a common trend and have a long-run relationship. This holds across the daily, weekly, and monthly data. There is also uni-directional causality among the sector indices. Aside from each sector index's own shocks largely influencing its own variation, innovations from the financial sector index contribute significantly to the variation of the other sector indices.

Keywords: Johansen’s Cointegration, Granger Causality, Forecast Error Variance Decomposition, Philippine Stock Exchange Sector Indices

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 81    [ Page 1 of 2, No. 80 ]

Drivers of Household Income Distribution Dynamics in the Philippines

Authors: Arturo Martinez Jr., Mark Western, Wojtek Tomaszewski, Michele Haynes, Maria Kristine Manalo, and Iva Sebastian

Abstract:

Using counterfactual simulations, we investigate the various factors that could explain the changes observed in poverty and inequality in the Philippines over the past decade. To do this, we decomposed per capita household income as a stochastic function of various forms of socio-economic capital and the socio-economic returns to capital. The results indicate that higher levels of asset ownership and higher economic returns to formal and non-agricultural employment have contributed to lower poverty, while human capital and access to basic services remained stagnant and thus had no impact on poverty and inequality. In general, we find that changes in socio-economic capital and changes in economic returns to capital act as offsetting forces, contributing to slow poverty and inequality reduction despite the rapid economic growth the Philippines has experienced over the past ten years.

Keywords: income decomposition, counterfactual simulation, poverty, inequality

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 80    [ Page 1 of 2, No. 81 ]

Purposive Sampling in the Analysis of Count Data

Authors: Paolo Victor T. Redondo

Abstract:

Purposive sampling is a non-probability sampling method often used when random/probability sampling is inefficient, too costly (in money or time), or infeasible. Moreover, much of the data collected in present-day studies are counts, and the analysis of such data needs an appropriate tool, commonly Poisson regression. The goal of this study is to determine whether relative-location-based purposive sampling can improve the estimates produced by Poisson regression, and whether the proposed sampling procedure can reduce the required sample size while retaining efficient, good-quality results. Simulations of different scenarios are conducted, and several possible partitions (based on relative location) from which the sample may be drawn are considered. Some partitions are found to work well even for small sample sizes, say 50, while others work as well as their simple random sample counterparts.

Keywords: purposive sampling, poisson regression, sample size

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 79    [ Page 1 of 2, No. 82 ]

AR-Sieve-based Prediction Interval for Sustainable Development Index

Authors: Jachelle Anne Dimapilis

Abstract:

We propose a procedure for monitoring progress in sustainable development as measured by indices. An AR-sieve-based nonparametric prediction interval is constructed to determine whether the movement of the indices is significant. Points outside the interval are considered significant and imply a positive or negative movement of the indices. This method is used to construct a prediction interval for a sustainable development index for the Philippines. The interval is indeed capable of detecting significant movements that can be explained by policies and other factors.

Keywords: sustainable development index, AR-sieve bootstrap, nonparametric prediction interval

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 78    [ Page 1 of 2, No. 83 ]

Value-at-Risk Estimates from a SETAR Model

Authors: Joselito C. Magadia

Abstract:

A self-exciting threshold autoregressive (SETAR) model is fitted to the PSEi, and value-at-risk (VaR) estimates are computed. Backtesting procedures are employed to assess the accuracy of the estimates, which are compared with estimates derived from two other approaches to VaR estimation.

Keywords: threshold models, backtesting, APARCH

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 77    [ Page 1 of 2, No. 84 ]

Comparison of Ordinal Logistic Regression with Tree-Based Methods in Predicting Socioeconomic Classes in the Philippines

Authors: Michael Daniel C. Lucagbo

Abstract:

The task of classifying Philippine households according to their socioeconomic class (SEC) has been tackled anew in a collaborative work among the Marketing and Opinion Research Society of the Philippines (MORES), the former National Statistics Office (NSO), and the University of the Philippines School of Statistics. This new system of classifying Philippine households was introduced at the 12th National Convention on Statistics, in a paper entitled 1SEC 2012: The New Philippine Socioeconomic Classification. To predict the SEC of a household, certain household characteristics are used as predictors. The 1SEC Instrument, whose scoring system is based on the ordinal logistic regression model, is then used to predict the household’s SEC. Recently, the statistical literature has seen the development of novel tree-based learning algorithms. This paper shows that the ordinal logistic regression model can still classify households better than three popular tree-based statistical learning methods: bootstrap aggregation (or bagging), random forests, and boosting. In addition, this paper identifies which clusters are easier to predict than others.

Keywords: socioeconomic classification, ordinal logistic regression, bagging, random forests, boosting

Download this article:

Year: 2016       Vol.: 65       No.: 1      


Record ID: 76    [ Page 1 of 2, No. 85 ]

The Recursive Alpha (RAlph) Coefficients: Quantifying Inter-Item Cohesion under Indirect Range Restriction

Authors: Michael Van B. Supranes; John Francis J. Guntan; Joy Pauline Adrienne C. Padua; Joseph Ryan G. Lansangan

Abstract:

Range restriction is a known cause of underestimation in the Cronbach’s Alpha reliability coefficient. The estimate of Cronbach’s Alpha is usually adjusted to minimize bias, but existing methods require information about the population. In the case of indirect range restriction, however, such information may not be readily or intuitively available. A data-driven bootstrap-based estimator that requires minimal assumptions about the unrestricted population, called the Recursive Alpha (RAlph) coefficient, is therefore proposed. Based on the simulation studies, the two versions of the RAlph coefficient perform best when the information associated with the range restriction is strongly correlated with the characteristic being measured, and when the true reliability coefficient Alpha is high. The RAlph coefficients are also found to be effective in minimizing the error in estimating Alpha under a strong presence of range restriction. Moreover, considerations on the length of the instrument, the scale of the responses, and the sample size aid in minimizing the error of the proposed coefficients. In support of the simulation results, an empirical study using behavioral data on social media users is carried out; evidently, the RAlph coefficients are far better than the ordinary Cronbach’s Alpha estimate.
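
As background, the sketch below shows an ordinary Cronbach's Alpha together with the bootstrap machinery on which estimators such as RAlph build. It is not the RAlph estimator itself (which recursively corrects for indirect range restriction), and the data are synthetic.

```python
import numpy as np

# Cronbach's Alpha: alpha = k/(k-1) * (1 - sum(item variances) / var(total)).

def cronbach_alpha(X):
    # X: respondents x items
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(5)
n, k = 400, 8
trait = rng.normal(0, 1, n)                    # latent trait being measured
X = trait[:, None] + rng.normal(0, 1, (n, k))  # items = trait + unique noise
alpha = cronbach_alpha(X)

# Bootstrap distribution of Alpha: resample respondents with replacement.
idx = rng.integers(0, n, size=(1000, n))
boot_alpha = np.array([cronbach_alpha(X[i]) for i in idx])
```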

Keywords: Range Restriction, Adjusted Cronbach’s Alpha, Bootstrap Sampling

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 75    [ Page 1 of 2, No. 86 ]

Incidence of Crimes and Effectiveness of Interventions in the National Capital Region: Evidence from Panel Data

Authors: Michael Daniel C. Lucagbo; Lianne S. De La Cruz; Jecca V. Narvasa; Micah Jane A. Paglicawan

Abstract:

Efforts to bring down the incidence of crimes have been intensified by the Philippine National Police (PNP). Among these crimes, the index crimes are prioritized; these include theft, robbery, carnapping, and motornapping. Interventions to bring down the incidence of crimes have recently been enacted by the National Capital Region Police Office (NCRPO) of the PNP, including increases in the number of police personnel, mobile patrols, beat patrols, and checkpoints. In this study, the effect of each of these interventions is examined in a panel data analysis using weekly data gathered from all of the police stations in the NCR. This paper performs a district-level analysis of the crimes and interventions. The negative binomial regression model for panel data is used to quantify the effects of the interventions on the incidence of index crimes. Results show that some (but not all) of these interventions are effective in reducing crime; resources should thus be redirected towards the effective strategies. The results also show differences in the effects of the interventions across the different districts, and the differences in the effects of the interventions among the different crimes are also studied.

Keywords: index crimes, intervention, panel data, negative binomial regression

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 74    [ Page 1 of 2, No. 87 ]

Bootstrapping Penalty Analysis in Sensory Evaluation of Pizza Products

Authors: Catherine Estiaga

Abstract:

Penalty analysis is a popular method used to evaluate data from sensory evaluation using the Just About Right (JAR) scale and the hedonic scale. Although the test estimates the mean drops for the “Too Little” and “Too Much” categories of product attributes, penalty analysis does not provide information that can be used to test the effect of each attribute on the overall liking score. The bootstrap resampling method, when used together with penalty analysis, estimates the standard error of the mean drops and allows testing for their significance. This method is used in product testing of pizza products.
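As a rough illustration of the approach described above (not the authors’ actual implementation), mean drops and their bootstrap standard errors can be sketched as follows; the JAR labels and liking scores are made-up data:

```python
import random

def mean_drop(jar, liking, level):
    """Mean drop: mean liking among 'JAR' raters minus mean liking at `level`."""
    jar_scores = [l for j, l in zip(jar, liking) if j == "JAR"]
    lvl_scores = [l for j, l in zip(jar, liking) if j == level]
    return sum(jar_scores) / len(jar_scores) - sum(lvl_scores) / len(lvl_scores)

def bootstrap_se(jar, liking, level, n_boot=1000, seed=7):
    """Bootstrap standard error of a mean drop, resampling consumers."""
    rng = random.Random(seed)
    n, drops = len(jar), []
    while len(drops) < n_boot:
        idx = [rng.randrange(n) for _ in range(n)]
        bj = [jar[i] for i in idx]
        bl = [liking[i] for i in idx]
        if "JAR" in bj and level in bj:       # skip degenerate resamples
            drops.append(mean_drop(bj, bl, level))
    m = sum(drops) / n_boot
    return (sum((d - m) ** 2 for d in drops) / (n_boot - 1)) ** 0.5

# Illustrative data: JAR ratings for one attribute, overall liking on a 1-9 scale
jar = ["JAR", "JAR", "TooLittle", "JAR", "TooMuch", "TooLittle",
       "JAR", "TooMuch", "JAR", "TooLittle"]
liking = [8, 7, 5, 8, 6, 4, 9, 5, 7, 6]
drop = mean_drop(jar, liking, "TooLittle")
se = bootstrap_se(jar, liking, "TooLittle", n_boot=300)
```

A mean drop roughly two standard errors above zero would then be flagged as a significant penalty for that attribute level.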

Keywords: bootstrap method, penalty analysis, just about right scale, hedonic scale, mean drops

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 73    [ Page 1 of 2, No. 88 ]

Statistical Evaluation of In Vivo Micronucleus Assays in Toxicology

Authors: John Closter F. Olivo

Abstract:

Several alternative statistical procedures have been suggested and published to statistically analyze the incidence of micronucleated polychromatic erythrocytes (MNPCs) among treatment groups, but no standard procedure has been singled out and exclusively recommended. In this study, the potential of T02 to induce chromosomal damage is tested using both Poisson and quasi-Poisson models for the statistical evaluation of the in vivo micronucleus (MN) assay. The genotoxic activity of T02 is assessed in the rodent bone marrow micronucleus test using male mice. Results show that MN frequencies are significantly elevated in mice exposed to any dose level of T02 administered orally as a single dose. Moreover, the results indicate that T02 tests as a positive compound under the conditions of the tests used.

Keywords: in vivo, micronucleus, MNPC, Poisson model, quasi-Poisson model, T02

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 72    [ Page 1 of 2, No. 89 ]

Modelling Rice Yield in the Philippines Using Dynamic Spatio-Temporal Models

Authors: Stephen Jun V. Villejo

Abstract:

Recent unpredictable and extreme weather episodes and infestations are realistic occurrences of structural change in the agricultural system; they produce outliers and extreme values in the data and consequently pose problems when building statistical models. An estimation procedure that is robust to structural change is therefore necessary. Three spatial-temporal models with varying dynamic characteristics of the parameters are postulated for agricultural yield in irrigated areas of the Philippines, each with a different estimation procedure. One of these is a robust estimation procedure using a forward search algorithm with bootstrap within a backfitting algorithm. The other two also use the backfitting algorithm, but infused with the Cochrane-Orcutt procedure. The robust estimation procedure, and the procedure that allows parameters to vary across space, give competitive predictive abilities and are better than the ordinary linear model. Simulation studies show the superiority of the robust estimation procedure over the Cochrane-Orcutt procedure and the ordinary linear model in the presence of structural change.

Keywords: spatial-temporal model, backfitting algorithm, robust estimation, additive model

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 71    [ Page 1 of 2, No. 90 ]

Some Zero Inflated Poisson-Based Combined Exponentially Weighted Moving Average Control Charts for Disease Surveillance

Authors: Robert Neil F. Leong; Frumencio F. Co; Daniel Stanley Y. Tan

Abstract:

One of the main areas of public health surveillance is infectious disease surveillance. With infectious disease backgrounds usually being complex, appropriate surveillance schemes must be in place. One such procedure is the use of control charts. However, with most background processes following a zero-inflated Poisson (ZIP) distribution, owing to the extra variability from excess zeros, control charting procedures must be properly developed to address this issue. Hence, drawing inspiration from combined control charting procedures for simultaneously monitoring each ZIP parameter in the context of statistical process control (SPC), this paper proposes several combined exponentially weighted moving average (EWMA) control charting procedures (the Bernoulli-ZIP and CRL-ZTP EWMA charts). An extensive simulation study involving multiple parameter settings and outbreak models (i.e., different shapes, magnitudes, and durations) demonstrates the applicability of combined control charting for disease surveillance with a ZIP background using EWMA techniques. For demonstration purposes, an application to actual data, using confirmed measles cases in the National Capital Region (NCR) from January 1, 2010 to January 14, 2015, revealed the comparability of the Bernoulli-ZIP EWMA scheme to the historical limits method currently in use.
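The Bernoulli-ZIP and CRL-ZTP charts are the authors’ constructions; the basic EWMA charting mechanics they build on can be sketched as follows, assuming a plain Poisson baseline with mean `mu0` and the standard asymptotic upper control limit (all parameter values illustrative):

```python
import math

def ewma_signals(counts, mu0, lam=0.2, L=3.0):
    """EWMA chart for counts: flag time points where the EWMA statistic exceeds
    the asymptotic upper control limit mu0 + L*sqrt(mu0*lam/(2-lam)),
    using the Poisson property that the baseline variance equals mu0."""
    ucl = mu0 + L * math.sqrt(mu0 * lam / (2 - lam))
    z, flags = float(mu0), []
    for c in counts:
        z = lam * c + (1 - lam) * z           # exponentially weighted update
        flags.append(z > ucl)
    return flags

# Ten in-control weeks at the baseline mean, then a simulated outbreak
weekly_cases = [2] * 10 + [20] * 5
flags = ewma_signals(weekly_cases, mu0=2)
```

The smoothing weight `lam` trades off sensitivity to small sustained shifts (small `lam`) against responsiveness to abrupt outbreaks (large `lam`); a ZIP background would additionally require monitoring the zero-inflation parameter.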

Keywords: EWMA control charts, disease surveillance, ZIP distribution, measles

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 70    [ Page 1 of 2, No. 91 ]

Nonparametric Bootstrap Test in a Multivariate Spatial-Temporal Model: A Simulation Study

Authors: Abubakar S. Asaad; Erniel B. Barrios

Abstract:

The assumptions of constant characteristics across spatial locations and constant characteristics across time points facilitate estimation in a multivariate spatial-temporal model. A test based on the nonparametric bootstrap is proposed to verify these assumptions. Simulation studies confirm that the proposed test procedures are powerful and correctly sized.

Keywords: coverage probability, robustness, spatial-temporal model

Download this article:

Year: 2015       Vol.: 64       No.: 2      


Record ID: 69    [ Page 1 of 2, No. 92 ]

Statistics for Applied Researchers: Bootstrap to the Rescue

Authors: Nabendu Pal; Suntaree Unhapipat

Abstract:

The availability of fast and affordable computing resources has empowered statisticians tremendously. It has also given applied researchers a unique edge to extend the frontier of their knowledge-base by taking advantage of sophisticated computational statistical tools, where theoretical derivations of complex sampling distributions are often not required or can be bypassed. The bootstrap method is one such tool, widely used in solving real-life problems that involve statistical inference. This article is designed to present the bootstrap in simple terms for applied researchers, with useful examples, and to show how it can go a long way in settling contentious issues with reasonably convincing results.
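In the spirit of the article, a minimal sketch of the nonparametric bootstrap is a percentile confidence interval for the mean, with made-up data and no distributional assumptions (function names are illustrative):

```python
import random

def bootstrap_ci(sample, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for an arbitrary statistic:
    resample the data with replacement, recompute the statistic each time,
    and read off the alpha/2 and 1-alpha/2 quantiles of the replicates."""
    rng = random.Random(seed)
    n = len(sample)
    reps = sorted(stat([sample[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(n_boot * alpha / 2)], reps[int(n_boot * (1 - alpha / 2)) - 1]

# Illustrative sample: a 95% CI for the mean without any normality assumption
data = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5, 6.1, 5.0, 4.7, 5.9]
mean = lambda xs: sum(xs) / len(xs)
lo, hi = bootstrap_ci(data, mean)
```

Because `stat` is an arbitrary function, the same routine works for medians, ratios, or any statistic whose sampling distribution is hard to derive analytically.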

Keywords: sampling distribution, p-value, nonparametric bootstrap, parametric bootstrap, test statistic

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 68    [ Page 1 of 2, No. 93 ]

Developed Sampling Strategy in Evaluating Teaching Performance Through Student Ratings

Authors: James Roldan S. Reyes; Zita VJ. Albacea

Abstract:

This paper presents an alternative to the online or electronic approach currently used by some higher education institutions (HEIs) in administering student ratings of teachers. The developed method retains the traditional paper approach but improves it through a sampling application that covers the sampling design, sample size, estimation technique, and strategic implementation. Three basic sampling designs (simple random, stratified random, and cluster sampling) were applied at three sampling rates (25%, 50%, and 75%). For the empirical evaluation of the developed method, the Student Evaluation of Teachers (SET) of the University of the Philippines Los Baños (UPLB) was utilized, using the bootstrap resampling technique. Based on the findings, stratified random sampling is the most appropriate sampling design, with 50% of the students in each class section serving as SET evaluators. Results also revealed that the bootstrap estimates of the standard error are lower than those obtained using the jackknife resampling procedure. Like the electronic approach, the improved traditional paper approach could reduce the cost of administering student ratings; the electronic approach, however, suffers from high non-response bias, which leads to invalid results. Thus, to minimize the non-response error of the developed method, a standard protocol for administering the student ratings has been formulated.
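The recommended design (stratified random sampling at a 50% rate, with class sections as strata) can be sketched as follows; the sampling frame and function names are hypothetical, not from the paper:

```python
import random

def stratified_sample(frame, stratum_of, rate=0.5, seed=3):
    """Stratified simple random sampling at a fixed rate within each stratum."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(stratum_of(unit), []).append(unit)
    sample = []
    for units in strata.values():
        k = max(1, round(len(units) * rate))  # at least one unit per stratum
        sample.extend(rng.sample(units, k))   # without-replacement draw
    return sample

# Hypothetical frame: (class_section, student_id) pairs, 10 students per section
frame = [(sec, i) for sec in ("A", "B", "C") for i in range(10)]
evaluators = stratified_sample(frame, stratum_of=lambda u: u[0], rate=0.5)
```

Stratifying by section guarantees that every class contributes evaluators in proportion to its size, which simple random sampling over the whole frame does not.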

Keywords: student ratings, traditional paper approach, sampling application, bootstrap resampling, jackknife resampling, non-response error

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 67    [ Page 1 of 2, No. 94 ]

Forecasting Time-Varying Correlation Using the DCC Model

Authors: John D. Eustaquio; Dennis S. Mapa; Miguel C. Mindanao; Nino I. Paz

Abstract:

Hedging strategies have become more complicated as traded assets have become more interrelated. The estimation of risks for optimal hedging thus involves not only the quantification of individual volatilities but also their pairwise correlations, so a model capturing the dynamic relationships is necessary to estimate and forecast correlations of returns through time. Engle's dynamic conditional correlation (DCC) model is compared with other models of correlation. The performance of the correlation models is evaluated using the daily log returns of the closing prices, from January 2000 to February 2010, of the Peso-Dollar exchange rate and the Philippine Stock Exchange index. Ultimately, Engle's DCC model is adopted because of its consistency with expectations. Though generally negative, the correlation between these two returns is not constant, as the results indicate. The forecast evaluation of the models was divided into in-sample and out-of-sample forecast performance, with short-term (22-day, 60-day, and 125-day) and medium-term (250-day and 500-day) rolling-window correlations, or realized correlations, as proxies for the actual correlation. Based on the root mean squared error and the mean absolute error, the integrated DCC model showed optimal forecast performance for the in-sample correlation patterns, while the mean-reverting DCC model had the most desirable forecast properties for dynamic long-run forecasts. Also, the Diebold-Mariano tests showed that the integrated DCC has greater predictive accuracy for the 3-month realized correlations than the rest of the models.
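The DCC model itself requires a full multivariate GARCH estimation, but the realized-correlation proxies mentioned above are plain rolling-window correlations and can be sketched directly (the return series here are made up):

```python
def rolling_correlation(x, y, window=22):
    """Rolling-window ('realized') correlation between two return series.
    The first output value uses observations 0..window-1, then the window
    slides forward one observation at a time."""
    out = []
    for t in range(window, len(x) + 1):
        xs, ys = x[t - window:t], y[t - window:t]
        mx, my = sum(xs) / window, sum(ys) / window
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        out.append(sxy / (sxx * syy) ** 0.5)
    return out

# Illustrative series: y moves exactly opposite to x, so every window gives -1
x = [0.10, -0.20, 0.30, 0.05, -0.10, 0.20, -0.30, 0.15, 0.25, -0.05]
y = [-a for a in x]
corrs = rolling_correlation(x, y, window=5)
```

A 22-day window corresponds to roughly one trading month, which is why the paper's short-term proxies start there.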

Keywords: dynamic conditional correlation, Peso-Dollar exchange rate, PSE index, hedging

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 66    [ Page 1 of 2, No. 95 ]

Classification and Prediction of Suicidal Tendencies of the Youth in the Philippines: An Empirical Study

Authors: Stephen Jun V. Villejo

Abstract:

This paper investigates suicidal tendencies of the youth in the Philippines based on the Young Adult Fertility and Sexuality Study (YAFS) 2002. The main goal of the paper is the classification and prediction of suicidal tendencies using classification algorithms. The classification algorithms considered (classification and regression trees, random forests, and conditional inference trees) and logistic regression yield consistent findings on the significant variables affecting suicidal tendencies. Due to the severely unbalanced classes of the response variable, the classification models have very poor predictive ability for the minority class even though the overall classification rate is high. A classification algorithm is proposed that improves the predictive ability by balancing out the correct classification rates in the two classes of the response variable.

Keywords: classification, suicide, prediction, logistic regression

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 65    [ Page 1 of 2, No. 96 ]

Comparison of Tree-Based Methods in Identifying Factors Influencing Credit Card Ownership and Prediction Accuracy

Authors: Karl Anton M. Retumban

Abstract:

Factors influencing credit card ownership were identified using data from The World Bank's Global Financial Inclusion Index Database and three tree-based methods: CART, boosting, and bagging. The prediction accuracy of the methods was compared in terms of the training and test error rates, and results on the world and Philippine data were compared. For both the world and Philippine data, the common factors influencing credit card ownership are financial account ownership, highest educational attainment, and age. For the world data, the influential factors are financial account ownership, debit card ownership, withdrawal frequency in the personal account, highest educational attainment, a current loan for a home or apartment purchase, age, getting cash at an ATM, and depositing cash at an ATM. For the Philippine data, the influential factors are financial account ownership, age, income quintile, highest educational attainment, and depositing cash over the counter at a branch of a bank or financial institution. Among the procedures, boosting has the smallest test error rate while bagging has the largest training and test error rates, for both the world and Philippine data. CART and boosting have the smallest training error rates for the world data and the Philippine data, respectively.

Keywords: classification and regression trees, boosting, bagging, credit card ownership, Global Financial Inclusion Index Database

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 64    [ Page 1 of 2, No. 97 ]

Predicting Socioeconomic Classification in the Philippines: Beyond the Ordinal Logistic Regression Model

Authors: Michael Daniel C. Lucagbo

Abstract:

Socioeconomic classification (SEC) is an important construct for capturing and understanding changes in the structure of a society. The 1SEC 2012, a new scheme for identifying the SEC of Philippine households, predicts SEC using information on household characteristics through the ordinal logistic regression model. This study aims to improve the predictive ability of the 1SEC methodology by using state-of-the-art statistical learning techniques (discriminant analysis, support vector machines (SVM), and artificial neural networks (ANN)) and thereby suggest a new scheme for predicting SEC. The results show that SVM and ANN exhibit improvements in exact-cluster prediction performance, suggesting alternative methods for predicting SEC.

Keywords: socioeconomic classification, ordinal logistic regression, discriminant analysis, support vector machines, artificial neural networks

Download this article:

Year: 2015       Vol.: 64       No.: 1      


Record ID: 63    [ Page 1 of 2, No. 98 ]

Determinants of income class in Philippine households: Evidence from the Family Income and Expenditure Survey 2009

Authors: Stephen Jun Villejo; Mark Tristan Enriquez; Michael Joseph Melendres; Dexter Eric Tan; Peter Julian Cayton

Abstract:

The government has instituted projects aimed at helping the poor and has implemented mechanisms to make these services accessible to them. The aims of these projects should not be defeated by misidentification of the deserving households, a problem that can be remedied through proper and thorough assessment of economic status. The study aims to provide a methodology and model for classifying households using demographic and household-asset variables that may be used in identifying recipients of poverty-targeted projects. Cluster analysis was employed to identify household classifications using income data from the Family Income and Expenditure Survey 2009; five income clusters were identified. To study the relationship between the income classes and several predictors of income identified in previous research, a family of logistic regression models was utilized, culminating in the generalized logistic regression model. Nine significant predictors were included in the final reduced model, which is assessed to have good fit via multiple Hosmer and Lemeshow tests. These variables were the following: location of the household, whether in the NCR or not, and whether in an urban or rural area; education and employment status of the household head; number of cars, air-conditioners, and television sets; and the building type and household type. The sensitivity table suggests that the model is biased towards predicting the lower income classes. The research has identified a viable methodology for the classification of households into income classes.

Keywords: multinomial logistic model, income determinants, clustering methods

Download this article:

Year: 2014       Vol.: 63       No.: 2      


Record ID: 62    [ Page 1 of 2, No. 99 ]

Determinants of regional minimum wages in the Philippines

Authors: Lisa Grace S. Bersales; Michael Daniel C. Lucagbo

Abstract:

In the Philippines, the National Wages and Productivity Commission (NWPC) formulates policies and guidelines that the Tripartite Wage and Productivity Boards use in determining minimum wages in their respective regions. Past studies have reviewed the implementation of minimum wage determination to identify which of the factors listed by the NWPC for consideration by the wage boards are actually used to determine the minimum wage; those results indicated that the significant determinant of the minimum wage is the consumer price index. Two-stage least squares estimation of a fixed effects model for panel data for the period 1990-2012 showed that the significant determinants of the regional minimum wage for non-agriculture are: the Consumer Price Index, Gross Regional Domestic Product, and the April employment rate. The lower and upper estimates from the estimated equation of the fixed effects model may provide intervals that the wage boards can use in making the final determination of the minimum wage. The following shocks, which would likely introduce abnormal wage-setting behavior on the part of the wage boards, were not significant: the 1997-1998 Asian Financial Crisis; the 2002 spillover effects from the U.S. technology bubble burst; and the 2008-2009 spillover effects from the Global Financial Crisis.

Keywords: tripartite wage and productivity boards, minimum wage, fixed effects model for panel data, shocks, two-stage least squares

Download this article:

Year: 2014       Vol.: 63       No.: 2      


Record ID: 61    [ Page 1 of 2, No. 100 ]

The link between expenditure on contraceptives and number of young dependents in the Philippines

Authors: Michael Daniel C. Lucagbo; Genica Peye C. Alcaraz; Kristina Norma B. Cobrador; Elaine Japitana; Gelli Anne Q. Sadsad

Abstract:

The growing population of the Philippines hinders the country from achieving economic development, given the limited resources available. The 2010 Census of Population and Housing (CPH) reports that the Philippine population has reached 92.1 million, a 15.8-million increase from the 76.3 million reported in 2000. Moreover, the relationship between population and family size, on the one hand, and poverty incidence, on the other, has been established through econometric models showing the causality between the presence of young dependents in a household and household welfare. Using the Family Income and Expenditure Survey (FIES) 2009 data, this study examines the factors affecting the number of young dependents in a household, focusing in particular on the household’s level of contraceptive expenditure. The negative binomial regression model, which allows for overdispersion in the data, is used to quantify the effect of the factors and predict the average number of young dependents in a household. Results show that for every P10,000 increase in total expenditure on contraceptives over a period of six months, the mean number of young dependents decreases by 3.7%. Other demographic variables, such as the education of the household head and the income of the household, are controlled for in the study.
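The reported effect size can be checked with the standard interpretation of count-regression coefficients, where the expected count is multiplied by exp(b) per unit of the predictor (a generic property of Poisson and negative binomial models, not the paper's code):

```python
import math

def pct_change(b, delta=1):
    """Percentage change in the expected count of a count-regression model
    for a `delta`-unit change in the predictor with coefficient b."""
    return (math.exp(b * delta) - 1) * 100

# The reported 3.7% drop per P10,000 corresponds to a coefficient of about
b = math.log(1 - 0.037)        # ~ -0.0377 (per P10,000 of spending)
```

Note that the effect compounds multiplicatively rather than additively: a P20,000 increase implies a drop of about 7.3%, not 7.4%.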

Keywords: Young dependents, contraceptive expenditure, negative binomial regression, overdispersion

Download this article:

Year: 2014       Vol.: 63       No.: 2      


Back to top