Record ID: 140 [ Page 2 of 8, No. 1 ]
Authors: Mazin M. Alanaz, Nada Nazar Alobaidi and Zakariya Yahya Algamal
Abstract:
The ridge estimator has consistently been demonstrated to be an attractive shrinkage method for reducing the effects of multicollinearity. The logistic regression model is widely used in applications where the response variable is binary. However, multicollinearity is known to inflate the variance of the maximum likelihood estimator of the logistic regression coefficients. To address this problem, numerous researchers have proposed logistic ridge regression models. In this paper, a modified logistic ridge estimator (MLRE) is proposed and derived. The idea behind the MLRE is to obtain a diagonal matrix with small diagonal elements, which decreases the shrinkage parameter so that the resulting estimator performs better while incurring only a small amount of bias. Our Monte Carlo simulation results suggest that the MLRE can bring significant improvement relative to other existing estimators.
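As a hedged illustration of the kind of estimator discussed above (a generic logistic ridge fit, not the authors' MLRE, whose diagonal-matrix modification is not reproduced here), a ridge-penalized logistic regression can be fitted by Newton-Raphson with an L2 penalty `k`:

```python
import numpy as np

def ridge_logistic(X, y, k=1.0, n_iter=50):
    """Ridge-penalized logistic regression via Newton-Raphson (IRLS).

    Adding (k/2)*||beta||^2 to the negative log-likelihood stabilizes the
    coefficient estimates when the columns of X are highly collinear.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))           # fitted probabilities
        w = mu * (1.0 - mu)                            # IRLS weights
        grad = X.T @ (y - mu) - k * beta               # penalized score
        hess = X.T @ (X * w[:, None]) + k * np.eye(p)  # penalized information
        beta = beta + np.linalg.solve(hess, grad)
    return beta
```

A larger `k` shrinks the coefficients harder; the MLRE's contribution, per the abstract, is a construction that lets a smaller shrinkage parameter suffice.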
Keywords: multicollinearity, ridge estimator, logistic regression model, shrinkage, Monte Carlo simulation
Year: 2021 Vol.: 70 No.: 2
Record ID: 139
Authors: Angelo E. Marasigan
Abstract:
A risk measure such as value-at-risk (VaR) is commonly used by financial institutions for capital management and for calculating the amount of risk exposure against a loss. The incoherence of VaR motivates the calculation of the conditional tail expectation (CTE) as a remedy. In this study, formulas for the CTE and the conditional tail variance (CTV) under the class of beta generalized Pareto (bgP) distributions were derived. The bgP is used to model the distribution of returns under different scenarios for various Philippine stock indices, fitted by maximum likelihood estimation via the simulated annealing method in R, and then used to compute the CTE and CTV of the return data under the specified model. To assess the performance of the bgP in modeling financial data sets, a comparison with its generating distributions, the (generalized) beta and Pareto models, was carried out. Finally, historical simulation was also performed and used to compute the corresponding VaR, CTE, and CTV for comparison with the model-based calculations.
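The historical-simulation versions of these risk measures mentioned in the closing sentence are straightforward to compute. The following sketch (a generic illustration, not the authors' bgP-based formulas) treats VaR as the empirical quantile of losses and CTE/CTV as the mean and variance of the losses beyond it:

```python
import numpy as np

def tail_risk(losses, alpha=0.95):
    """Historical-simulation VaR, CTE and CTV at confidence level alpha.

    VaR is the empirical alpha-quantile of the losses, CTE is the mean
    loss given that the loss exceeds VaR, and CTV is the variance of the
    losses in that same tail.
    """
    losses = np.sort(np.asarray(losses, dtype=float))
    var = np.quantile(losses, alpha)
    tail = losses[losses > var]
    if tail.size == 0:            # degenerate tail: fall back to the maximum loss
        tail = losses[-1:]
    return var, tail.mean(), tail.var()
```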
Keywords: risk measure, beta generalized Pareto distribution, value-at-risk, conditional tail expectation, conditional tail variance, heavy-tailed
Year: 2021 Vol.: 70 No.: 1
Record ID: 138
Authors: Ramoncito G. Cambel and Zita VJ. Albacea
Abstract:
The Philippine Cities and Municipalities Competitiveness Index (CMCI) of the National Competitiveness Council (NCC) measures the competitiveness of local government units. This paper presents the results of a study that employed an alternative weighting method for the indicators through the statistical techniques of principal component analysis and factor analysis. Moreover, the study utilized both national census data and administrative data from the government. Four factors were extracted from the available data: "Housing and Household Characteristics, Financial Institutions and Capacity of Government for Health Services," "Establishments for Tourist Accommodation," "Cost of Labor," and "Capacity of Local Government to Deliver Services." Unlike the distribution of the NCC's 2016 competitiveness levels of municipalities and cities, which is symmetric, the distribution of the proposed alternative competitiveness index is positively skewed. This suggests that only a few municipalities and cities can be considered highly competitive. A Moran's I of 0.3457 indicates positive spatial autocorrelation in the competitiveness levels of Philippine municipalities and cities. Municipalities and cities in the provinces of Bulacan, Pampanga, Cavite, Laguna, Rizal, and Cebu, and in the National Capital Region, are generally identified as belonging to the high-high cluster in terms of the proposed alternative 2016 competitiveness level. In contrast, municipalities and cities in the provinces of Mindoro, Romblon, and Surigao del Norte, and those in the Cordillera Administrative Region, Ilocos Region, Cagayan Valley Region, and Eastern Visayas Region, generally fall in the low-low cluster. Statistical properties of the index such as consistency, accuracy, and precision were then assessed using the bootstrap resampling technique.
Results showed that the proposed alternative CMCI is unbiased and mean-square-error consistent. This indicates that the proposed CMCI could be used as an alternative municipality- and city-level competitiveness index for the Philippines. The index concentrates on four factors comprising 33 indicators, which are weighted according to their contribution to the variability of the index.
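The global Moran's I reported above has a standard closed form; here is a minimal sketch with a hypothetical four-site rook-adjacency example (not the study's municipal weight matrix):

```python
import numpy as np

def morans_i(x, W):
    """Global Moran's I: (n / sum(W)) * (z' W z) / (z' z), with z = x - mean(x).
    Positive values indicate spatial clustering of similar values."""
    x = np.asarray(x, dtype=float)
    z = x - x.mean()
    return (x.size / W.sum()) * (z @ W @ z) / (z @ z)

# Hypothetical example: four sites on a line with rook (neighbor) weights.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
```

With clustered values such as [1, 1, -1, -1] this gives a positive I; with perfectly alternating values it is negative.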
Keywords: statistical index, spatial analysis, factor analysis
Year: 2021 Vol.: 70 No.: 1
Record ID: 137
Authors: Varun Agiwal and Jitendra Kumar
Abstract:
In time series, the present observation depends not only on its own past observation(s) but also on other explanatory or exogenous variables. These variables do not influence the series continuously over the long run and may be removed, discontinued, or subjected to merger and acquisition (M&A), because their effect diminishes as the correlation becomes less significant. M&A occurs when one or more variables no longer meet the conditions required to survive in the system. To analyze the performance of the M&A concept, this study proposes a merged autoregressive (M-AR) model for examining the impact of a merger on the model parameters as well as on the acquired series. A Bayesian approach is adopted for parameter estimation under different loss functions and compared with the least squares estimator. To test the presence/association of the merged series in the acquired series, the Bayes factor, the full Bayesian significance test, and a posterior probability based on a credible interval are derived. A simulation study and an empirical application to banking indicators for Indian banks are carried out to evaluate the performance of the proposed model. The study concludes that the proposed time series model solves the problem of discontinuity in the series and is also statistically manageable.
Keywords: autoregressive model, Bayesian inference, merger and acquisition series, Indian bank
Year: 2021 Vol.: 70 No.: 1
Record ID: 136
Authors: R. Maya, M.R. Irshad, and S.P. Arun
Abstract:
In this work, we consider a new lifetime distribution, the Bilal distribution, and derive the best linear unbiased estimator (BLUE) of its scale parameter based on order statistics. We further estimate the scale parameter of the Bilal distribution by U-statistics, using best linear functions of order statistics as kernels. The efficiency of the proposed estimator relative to the minimum variance unbiased estimator (MVUE) has also been evaluated.
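The Bilal distribution's order-statistic moments are not reproduced in this abstract, so the sketch below illustrates the same BLUE machinery with the exponential distribution, whose standardized order-statistic means and covariances are known in closed form. For that case the BLUE of the scale parameter based on all order statistics reduces to the sample mean, which makes a convenient check:

```python
import numpy as np

def blue_scale(x_sorted, alpha, sigma):
    """Generic BLUE of a scale parameter from a full set of order statistics.

    alpha[r] = E[X_(r+1)]/theta and sigma[r, s] = Cov(X_(r+1), X_(s+1))/theta^2
    are the standardized moments; the BLUE is
    (alpha' Sigma^-1 x) / (alpha' Sigma^-1 alpha).
    """
    w = np.linalg.solve(sigma, alpha)
    return (w @ x_sorted) / (w @ alpha)

def exp_moments(n):
    """Standardized order-statistic means/covariances for Exp(theta)."""
    inv = 1.0 / (n - np.arange(n))      # 1/n, 1/(n-1), ..., 1
    alpha = np.cumsum(inv)              # E[X_(r)] / theta
    v = np.cumsum(inv ** 2)             # Var[X_(r)] / theta^2
    sigma = np.minimum.outer(v, v)      # Cov[X_(r), X_(s)] = v[min(r, s)]
    return alpha, sigma
```

Substituting the Bilal distribution's own alpha and sigma into `blue_scale` would give the estimator the paper studies.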
Keywords: Order statistics, Bilal distribution, best linear unbiased estimator, U-statistics
Year: 2021 Vol.: 70 No.: 1
Record ID: 135
Authors: Ali Reza Soltanian, Hassan Ahmadinia, and Ghodratollah Roshanaei
Abstract:
Non-compliance is a common deviation from randomized clinical trial protocols. The standard approaches for comparing drug effects in randomized clinical trials in the presence of non-compliance are intention-to-treat (ITT), as-treated (AT), and per-protocol (PP) analysis. Each of these approaches has disadvantages when evaluating the effect of medication in the presence of non-compliance. The current study compared the accuracy of the instrumental variable (IV), intention-to-treat, as-treated, and per-protocol techniques. We assumed that non-compliance occurred only for some patients in the new treatment group, independently of patient outcomes. To compare these techniques, various scenarios were simulated. The MSE of both the PP and IV models changes only with the value of w (the non-compliance ratio): at all values of θ (the treatment effect), the MSE of these two models increases with increasing non-compliance ratio, and changing θ does not affect the MSE. When non-compliance occurs only in the intervention group, the MSE of the AT model likewise changes only with w: at all values of θ, it increases with the non-compliance ratio, and changing θ has no effect. The MSE of the ITT model, however, is strongly influenced by θ. At low θ values its MSE is lower than that of the other methods and it better estimates the therapeutic effect; in this case the MSE increases only slightly with w. But as θ increases, so does the MSE, and in that case the MSE increases sharply with w.
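A minimal simulation in the spirit of the scenarios described (a sketch under the stated assumptions only: effect θ, non-compliance rate w confined to the new-treatment arm and independent of outcomes; not the authors' exact design):

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_trial(n=200_000, theta=1.0, w=0.3):
    """One simulated trial with non-compliance only in the new-treatment arm."""
    z = rng.integers(0, 2, n)                  # random assignment
    comply = rng.random(n) > w                 # compliance, independent of outcome
    d = z * comply                             # treatment actually received
    y = theta * d + rng.standard_normal(n)     # outcome
    itt = y[z == 1].mean() - y[z == 0].mean()  # intention-to-treat
    at = y[d == 1].mean() - y[d == 0].mean()   # as-treated
    pp = y[(z == 1) & comply].mean() - y[z == 0].mean()  # per-protocol
    iv = itt / comply[z == 1].mean()           # instrumental-variable (Wald) estimate
    return itt, at, pp, iv
```

In this setting the ITT estimate is attenuated towards (1 - w)θ, while the Wald IV ratio rescales it back to θ.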
Keywords: causal model, non-compliance, randomized clinical trials, simulation
Year: 2021 Vol.: 70 No.: 1
Record ID: 134
Authors: Shashi Bhushan, Anoop Kumar, and Saurabh Singh and Sumit Kumar
Abstract:
In this article, we consider an improved class of estimators of the population mean using additional information under simple random sampling (SRS). Expressions for the bias and mean square error of the proposed class of estimators are obtained up to the first order of approximation. In addition, some well-known estimators are identified as particular members of the proposed class. The theoretical results are established, and an empirical study is carried out using real and simulated data sets. The findings appear rather satisfactory, showing improvement over the existing estimators.
Keywords: simple random sampling, mean square error, efficiency
Year: 2021 Vol.: 70 No.: 1
Record ID: 133
Authors: Nelda A. Nacion and Arturo Y. Pacificador
Abstract:
The main purpose of this study is to propose an alternative procedure for generating small area estimates of poverty incidence using imputation-like procedures, coupled with a calibration of estimates to ensure coherence with the regional estimates. Specifically, this study applied a deterministic regression approach and a stochastic imputation-like procedure similar to stochastic regression, and then applied calibration techniques to ensure that the small area estimates conform to the known regional estimates. The difference between this methodology and the ELL method is that the error terms used for prediction are based on the empirical distribution of the residuals, thereby providing protection against misspecification of the error model. At the same time, the procedure is simpler given available computing resources. In addition, the proposed methodology utilized only data from the census short form, which is a 100% sample, thus eliminating another source of variation compared with using the census long form, which is collected from only a sample of households. This study used the Family Income and Expenditure Survey (FIES) of 2009 and the Census of Population and Housing (CPH Form 2) of 2010 to come up with reliable estimates of poverty incidence at the municipal level. Since the CPH is conducted in the Philippines every 10 years, the 2010 CPH is the latest data available. The researcher was able to produce small area estimates of poverty in the Philippines at the municipal level by combining survey data with auxiliary data derived from the census. The study fitted different models for each region. From the results, the following conclusions were drawn: Stochastic Regression Imputation (SRI) is better than Deterministic Regression Imputation (DRI) for attaching income to the CPH; the SRI preserved the income distribution in 82% of the regions, whereas the DRI preserved only 10.22% of the validation sets.
Since the error in fitting the DRI to the CPH does not follow a well-known distribution (such as the normal distribution), a non-parametric way of estimating the error, Kernel Density Estimation (KDE) or the histogram method, was used to generate the errors attached in the SRI and was found to be effective. Using the calibration technique achieved municipal estimates that conform to the regional estimates. The estimated poor households in the CPH reflect the bottom 30% of the wealth index.
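The KDE-based error generation described above amounts to a smoothed bootstrap: resample the empirical residuals and perturb each draw with kernel noise. A sketch (the Gaussian kernel and Silverman's rule-of-thumb bandwidth are this sketch's assumptions, not necessarily the study's choices):

```python
import numpy as np

rng = np.random.default_rng(7)

def kde_errors(residuals, size, bandwidth=None):
    """Draw error terms from a Gaussian KDE of the empirical residuals
    (a smoothed bootstrap), instead of assuming a parametric error model."""
    residuals = np.asarray(residuals, dtype=float)
    if bandwidth is None:                     # Silverman's rule of thumb
        bandwidth = 1.06 * residuals.std() * residuals.size ** (-1 / 5)
    draws = rng.choice(residuals, size=size)  # resample empirical residuals
    return draws + bandwidth * rng.standard_normal(size)  # add kernel noise
```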
Keywords:
Year: 2021 Vol.: 70 No.: 1
Record ID: 132
Authors: Christian P. Umali and Felino P. Lansigan
Abstract:
A composite provincial-level food security index can measure food security at the sub-national level, which can be helpful in policy making. However, it can be non-robust due to the uncertainties involved in the choice of input factors used in its construction, namely: different sources of data, normalization methods, weighting schemes, aggregation systems, and the level of importance placed on the different dimensions of food security, i.e., availability, accessibility, utilization, and stability. In this study, an uncertainty analysis technique was employed to assess the robustness of an index constructed through the conventional approach. Sensitivity analysis was done to quantify the influence of each factor in the index-building process, to identify factors that should be prioritized and can be fixed in subsequent development of the index, and to determine which levels of the factors are responsible for producing the desirable model outcomes. The study was exploratory in considering the ratio-with-mean normalization technique and a combination of the additive and geometric aggregation methods as potential inputs in the index construction. In computing the Sobol’ sensitivity indices, the formulas suggested by Jansen et al. (1994) and Nossent and Bauwens (2012) were investigated and compared under varying sample sizes. The results can inform a more appropriate choice of procedure for computing the Sobol’ sensitivity indices at an optimum sample size, and likewise provide insights for the future development of a uniform and more defensible composite provincial-level food security index.
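To illustrate the kind of Sobol’ computation being compared, here is a Jansen-style total-order estimator sketched on a hypothetical additive test function (an illustration only; the exact formulas compared in the paper may differ):

```python
import numpy as np

rng = np.random.default_rng(1)

def sobol_total_jansen(f, d, n=100_000):
    """Total-order Sobol' indices via a Jansen-style estimator:
    ST_i = E[(f(A) - f(A with column i taken from B))^2] / (2 Var f)."""
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA = f(A)
    var_y = fA.var()
    st = np.empty(d)
    for i in range(d):
        AB = A.copy()
        AB[:, i] = B[:, i]      # replace only the i-th input column
        st[i] = np.mean((fA - f(AB)) ** 2) / (2.0 * var_y)
    return st

# Hypothetical additive test function: exact indices are 0.2 and 0.8.
f = lambda X: X[:, 0] + 2.0 * X[:, 1]
```

For an additive function the total-order indices coincide with the first-order ones, which is why the toy example has a known answer.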
Keywords: uncertainty analysis, sensitivity analysis, Sobol’ sensitivity index, Monte Carlo integral, composite index, food security
Year: 2020 Vol.: 69 No.: 2
Record ID: 131
Authors: N. A. Ikoba and E. T. Jolayemi
Abstract:
The declining fortunes of some of Nigeria’s indigenous languages are examined in this paper. A multi-dimensional indigenous language-use questionnaire was constructed to elicit data through a survey carried out in several Nigerian cities. The aim was to acquire relevant data on indigenous language ability and the possible causes of the decline in language use. The survey results showed a low level of indigenous language literacy for most of the languages surveyed. The proportion of language use at home was also generally low for most of the surveyed languages, below the 70% threshold for virile languages. Several reasons were adduced for the non-transfer of indigenous language ability from parents to children; tests of statistical independence showed that respondents’ perception that their language is inferior to English, the belief that the child will be limited in school, negligence, and parents’ inability to speak their heritage language were the major reasons for the decline. A logistic regression analysis of the data also showed that acquisition of language literacy depended on a person’s place of childhood, age, level of education, frequency of use at home, and the indigenous language spoken by the person’s mother.
Keywords: indigenous languages, language literacy, test of independence, intergenerational transmission, logistic regression
Year: 2020 Vol.: 69 No.: 2
Record ID: 130
Authors: Manuelito De Vera Bengo
Abstract:
The Management Competency Sub-scales (MCS) examined in this inquiry were constructed under the theoretical assumption of a six-fold structure: self-image, leadership, skills, action, performance, and orientation. However, no empirical evidence is so far available to support this assumption, so a comprehensive measure is needed to adequately gauge its validity. The aim of the present study was to assess the validity of the categorization of management competencies into the six overarching MCS. The results showed that, instead of the expected six-fold structure, the MCS comprised eight factors: expertise, self-image, skills, leadership, innovation, influencer, sustainability, and orientation. Looking at homogeneity and scale length in tandem, the scales were subsequently refined and the number of items reduced to 45. The inter-correlations between the derived sub-scales, as well as the mean loadings of the items on the sub-scales, were significant, indicating the validity of the construct. The Cronbach alphas for the different sub-scales were at an acceptable level, above .7, suggesting relative stability of the derived scales. Comparing the inter-scale correlations of the management competency sub-scales with their average Cronbach alpha, the values were found to be substantially different, supporting the discriminant validity of the construct. Furthermore, in a second-order factor analysis, all the sub-scales loaded above .3 on the single extracted factor, suggesting convergent validity. The study provides a good alternative to the bounty of competency models and frameworks that have been developed in the area of management competency.
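For reference, the Cronbach alpha quoted above has a simple computational form; a minimal sketch with hypothetical toy data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_vars / total_var)
```

Perfectly parallel items give alpha = 1; weaker inter-item agreement pulls it down.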
Keywords: management competency sub-scales, factor analysis, validity, Cronbach alpha, inter-correlations
Year: 2020 Vol.: 69 No.: 2
Record ID: 129
Authors: Melissa Jane Siy and Francisco N. de los Reyes
Abstract:
This study analyzed and validated the statistical aspects of the non-parametric continuous norming technique, a method used in creating scores for psychometric tests. Using the Work Profile Questionnaire - Emotional Intelligence (WPQei) with a sample of Filipino respondents, the study demonstrated how the norming technique can be used to create age-group-based scores (age norms). Based on the results, the models from the technique can produce usable scores in practice, with acceptable adjusted R-squared values; however, some challenges emerged with regard to choosing the smoothing parameter, the consistency of the significant variables and coefficient signs of the model, and the calculation of the score tables. Bootstrapping is recommended to improve the robustness of the technique.
Keywords: norming, WPQei, age-group-based score tables, bootstrap
Year: 2020 Vol.: 69 No.: 2
Record ID: 128
Authors: Nehemiah A. Ikoba
Abstract:
In this paper, a sequential Markov chain conceptualization of the winners of the FIFA World Cup is presented. The aim was to capture the dynamics of the World Cup and predict the future winner via Markov chain analysis. A sequentially incremented state-space Markov chain is used to approximate the process of winning the FIFA World Cup, and the corresponding Markov chain at every epoch where the state space increases was computed. The analysis showed that the model closely reproduces the previous World Cup winners. It is predicted that a new winner may emerge in the 2022 World Cup in Qatar. However, if a new winner does not emerge, then on the basis of both the sequential Markov chain and the first-passage matrix of the conceptualized model, Brazil is the most probable winner of the 2022 World Cup, followed by Italy and Germany. The sequential Markov chain approach can be applied to other sporting events and scenarios in which there is only a small probability that the number of observed states will increase from a small set of states.
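The mean first-passage times underpinning such predictions follow from the transition matrix by a standard linear solve; a sketch with a hypothetical two-state chain (not the paper's World Cup chain):

```python
import numpy as np

def mean_first_passage(P, target):
    """Mean first-passage times into `target`: for i != target,
    m_i = 1 + sum_{k != target} P[i, k] * m_k, i.e. solve (I - Q) m = 1."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]   # transitions among the non-target states
    m = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    out = np.zeros(n)
    out[keep] = m
    return out
```

For P = [[0.5, 0.5], [0.2, 0.8]], the mean time to first reach state 1 from state 0 solves m = 1 + 0.5 m, giving 2 steps.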
Keywords: stochastic processes, sports analytics, transition probability matrix, mean first passage time, mean recurrence time
Year: 2020 Vol.: 69 No.: 2
Record ID: 127
Authors: A. Bachir and K. Djeddour-Djaballah
Abstract:
In this paper, we propose an estimator of a quantile of an unknown population. By framing this as a stochastic approximation problem, we obtain an estimator of the quantile and establish its almost-sure convergence as well as its asymptotic normality. Some simulation results are presented to show that the proposed estimator works well.
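A classical Robbins-Monro recursion of the kind this abstract alludes to (a generic sketch; the paper's exact step sizes and conditions are not reproduced here): each new observation nudges the running estimate down when it falls below the estimate and up otherwise, with step c/n.

```python
import numpy as np

rng = np.random.default_rng(3)

def rm_quantile(stream, p, c=1.0):
    """Stochastic-approximation estimate of the p-quantile:
    x_{n+1} = x_n + (c / n) * (p - 1{X_n <= x_n})."""
    x = 0.0
    for n, obs in enumerate(stream, start=1):
        x += (c / n) * (p - (obs <= x))
    return x
```

On a Uniform(0,1) stream the recursion converges to the true p-quantile p itself.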
Keywords: stochastic approximation, non-parametric estimation, quantile estimation
Year: 2020 Vol.: 69 No.: 1
Record ID: 126
Authors: Reanne Len C. Arlan, James Roldan S. Reyes, and Mary Denelle C. Mariano
Abstract:
Penalty analysis is the most widely accepted approach in consumer-oriented sensory evaluation dealing with just-about-right (JAR) data. However, testing the significance of a penalized sensory attribute's effect on the overall acceptability of a product is limited by the absence of a standard error for the mean drop. To address this limitation, this paper applied alternative approaches using bootstrap and jackknife resampling. Using sensory evaluation data on the soft-serve vanilla-based ice cream of a certain fast food company, the ordinary and the proposed jackknifing penalty analyses yielded similar results. While both analyses found "too much sweetness" to be the most troublesome attribute, bootstrapping penalty analysis showed that a "too weak vanilla flavor" produces a mean drop of 0.45 points in the product's overall acceptability. Through empirical validation, jackknife mean drop estimates were found to be closer to the original mean drops and to have smaller standard error estimates than the bootstrap mean drop estimates. Overall, bootstrapping penalty analysis was more conservative in determining critical sensory attributes. Moreover, the proposed jackknifing penalty analysis was able to reproduce the results of ordinary penalty analysis while also calibrating problematic sensory attributes on the basis of statistical evidence. This can offer a more powerful and useful evaluation tool for product development in the food industry.
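The mean drop and its resampled standard error, the quantities at issue above, can be sketched as follows (toy data for illustration; simplified to a single non-JAR group, whereas practice usually splits "too little" and "too much"):

```python
import numpy as np

rng = np.random.default_rng(11)

def mean_drop(liking, jar):
    """Penalty-analysis mean drop: mean overall liking in the just-about-right
    (JAR) group minus mean liking in the non-JAR group."""
    liking, jar = np.asarray(liking, float), np.asarray(jar, bool)
    return liking[jar].mean() - liking[~jar].mean()

def bootstrap_se(liking, jar, n_boot=2000):
    """Bootstrap standard error of the mean drop via case resampling."""
    liking, jar = np.asarray(liking, float), np.asarray(jar, bool)
    n = liking.size
    drops = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if jar[idx].all() or (~jar[idx]).all():   # need both groups present
            continue
        drops.append(mean_drop(liking[idx], jar[idx]))
    return float(np.std(drops, ddof=1))
```

With a standard error in hand, the drop can be tested rather than merely eyeballed, which is the gap the paper addresses.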
Keywords: bootstrapping, jackknifing, just about right scale, 9-point hedonic score, mean drop, product development
Year: 2020 Vol.: 69 No.: 1
Record ID: 125
Authors: Xavier Javines Bilon and Jose Antonio R. Clemente
Abstract:
A methodological challenge for researchers performing content analysis on social media data is deciding on a sampling procedure that obtains content to be analyzed with the least sampling error. The study used and recommended two different kinds of elementary unit, post and day, that allow probability sampling of Facebook data regardless of whether the sampling frame of all posts within the time period of interest is obtainable. Four sampling designs for the post as elementary unit and five for the day as elementary unit, including three commonly used sampling options for content analysis (simple random sampling without replacement (SRSWOR), constructed week sampling, and consecutive day sampling), were employed on Facebook data mined from Mocha Uson Blog from 2010 to 2018. Estimates of parameters such as measures of user engagement and proportions of topic-related posts were obtained at increasing sample sizes. The sampling designs for each elementary unit were evaluated by comparing the normalized area under the coefficient of variation curve (NAUCV) over the different sample sizes. For the post as elementary unit, with content type as the stratification variable, stratified random sampling (StRS) using Neyman allocation based on total user engagement is recommended (average NAUCV = 31.28%). For the day as elementary unit, SRSWOR is recommended (average NAUCV = 42.31%).
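The Neyman allocation used in the recommended StRS design assigns sample sizes proportional to stratum size times stratum standard deviation; a minimal sketch with hypothetical strata:

```python
import numpy as np

def neyman_allocation(n, N_h, S_h):
    """Neyman allocation for stratified random sampling:
    n_h proportional to N_h * S_h (stratum size times stratum SD)."""
    N_h, S_h = np.asarray(N_h, float), np.asarray(S_h, float)
    weights = N_h * S_h / (N_h * S_h).sum()
    return np.round(n * weights).astype(int)
```

Strata that are larger or more variable (here, in total user engagement) receive proportionally more of the sample.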
Keywords: content analysis, sampling method, social media analytics
Year: 2020 Vol.: 69 No.: 1
Record ID: 124
Authors: Hernan G. Pantolla and Rechel G. Arcilla
Abstract:
Crop production, among many other things, is threatened by climate change. In the Philippines, one such crop is rice. Various models have been applied to investigate factors that affect the production of this grain, and specialized software has been used to simulate its production under different conditions, but these studies were done on a larger scale. Local and international organizations call for location-specific crop production models that can describe long- and short-term responses to changes in the environment. Hence, a model is proposed that uses the bounds testing feature of the autoregressive distributed lag (ARDL) model to determine the long-run and short-run effects of selected climate change indicators on the rice yield of Central Luzon, the rice granary of the Philippines. The findings showed that the speed of adjustment towards equilibrium is 1.26%. Precipitation, temperature, and atmospheric carbon dioxide concentration have long-run impacts, while the lagged differences of the yield itself and of temperature, as well as the first difference of temperature, have short-run effects. The sea surface temperature anomaly was found to have no significant contribution. The fit of the generated ARDL model was substantiated by several diagnostic tests, and it showed gains in forecasting accuracy when compared with a baseline model. The results of this study could be used in decision-making and policy development.
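The long-run effects in an ARDL model come from its implied long-run multiplier; here is a sketch of the ARDL(1,1) case on simulated data with a known multiplier (an illustration only, not the study's bounds-testing procedure or its full lag structure):

```python
import numpy as np

rng = np.random.default_rng(5)

def ardl_long_run(y, x):
    """OLS fit of an ARDL(1,1) model y_t = a + phi*y_{t-1} + b0*x_t + b1*x_{t-1}
    and its implied long-run multiplier (b0 + b1) / (1 - phi)."""
    Z = np.column_stack([np.ones(len(y) - 1), y[:-1], x[1:], x[:-1]])
    coef, *_ = np.linalg.lstsq(Z, y[1:], rcond=None)
    a, phi, b0, b1 = coef
    return (b0 + b1) / (1.0 - phi)

# Simulated series with known long-run multiplier (0.3 + 0.2) / (1 - 0.5) = 1.0
T = 5000
x = rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.5 * y[t - 1] + 0.3 * x[t] + 0.2 * x[t - 1] + 0.1 * rng.standard_normal()
```

The multiplier summarizes how much the level of y eventually shifts per unit shift in the level of x, once the dynamics settle.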
Keywords: autoregressive distributed lag, bounds testing, climate change indicators, rice yield, Central Luzon
Year: 2020 Vol.: 69 No.: 1
Record ID: 123
Authors: Michael Ralph M. Abrigo, Aniceto C. Orbeta Jr. and Alejandro N. Herrin
Abstract:
A key research question with relevance to policy is how the full implementation of the RPRH law, with attention to its three key elements, namely comprehensive sexuality education, family planning, and maternal and child health, contributes to economic growth and poverty reduction. This study describes the various pathways through which these key elements would work their way to affecting fertility, human capital formation, economic growth, and poverty reduction. Among the pathways considered, recent studies have provided insights into the economic and social impact of preventing early childbearing, reducing maternal mortality, and preventing child stunting. However, a gap exists in measuring the effect of achieving couples' desired fertility on economic growth and poverty reduction through its impact on the age structure, investments in human capital, and productivity. The study addresses this gap by estimating the economic gains from a full implementation of the RPRH law, with attention to the family planning component, as opposed to delayed-implementation or no-implementation ("business as usual") scenarios. The results show that helping couples achieve their desired number of children can have substantial economic benefits in terms of more rapid economic growth arising from the first and second demographic dividends, which in turn accelerates poverty reduction in terms of both incidence and the number of poor.
Keywords: RPRH law, total fertility rate, demographic dividends, poverty reduction
Year: 2020 Vol.: 69 No.: 1
Record ID: 122
Authors: Edsel L. Beja Jr.
Abstract:
The paper tests the convergent validity and causality of the Consumer Expectations Survey from the Bangko Sentral ng Pilipinas and the Quarterly Social Weather Survey from the Social Weather Stations. The results indicate that there is convergent validity and bidirectional causality. Further results reveal that both share a common set of determinants. Overall, the findings imply that the Consumer Expectations Survey and the Quarterly Social Weather Survey embody comparable information, so that one can serve as a proxy measure for the other. For policy, the findings support the view that a monetary approach to managing the overall performance of the country, especially with regard to the inflation rate, in conjunction with a fiscal approach to securing the provision of basic social services, is key to effective management of sentiments and to an improvement in the quality of life.
Keywords: Consumer Expectations Survey, Quarterly Social Weather Survey, convergent validity, cointegration, causality, Toda-Yamamoto procedure
Year: 2019 Vol.: 68 No.: 2
Record ID: 121
Authors: Geoffrey M. Ducanes and Dina Joan S. Ocampo
Abstract:
The study measures the impact of the implementation of the Senior High School (SHS) program, which came into full effect in school year 2017–2018, on the school participation of 16- to 17-year-old learners in the Philippines. The SHS program, which extended secondary education in the country from four to six years, was the most ambitious education reform in the country in recent memory. The study found that the SHS program resulted in an increase in the overall school participation rate of at least 13 percentage points among 16- to 17-year-olds. Perhaps more importantly, the increase in school participation was found to be highly progressive, with 16- to 17-year-olds in the two bottom income quintiles experiencing the largest increases by a wide margin. The study also found that both male and female students benefited from the program, although the gains appear higher for female students. Most of the gains in school participation occurred outside Metro Manila.
Keywords: impact evaluation, logit regression, education reform, senior high school, gender in education
Year: 2019 Vol.: 68 No.: 2