
Record ID: 160

Modelling Portfolio Risk and Diversification Effects of a Portfolio Using the Exponential Distribution – Bivariate Archimedean Gumbel Copula Model

Authors: Owen Jakata and Delson Chikobvu

Abstract:

This study uses the Archimedean Gumbel copula model to construct the dependence structure and joint probability distribution of asset returns, with the Exponential distribution as the marginal distribution. The main objective is to estimate the diversification effects of investing in a portfolio consisting of two financial assets, namely the South African Industrial and Financial Indices. The Exponential distribution is used as the marginal distribution of the returns, instead of the Normal distribution, to better characterise the financial returns of the two assets. Scatterplots indicate that the dependence in both gains and losses is well captured by the Archimedean Gumbel copula. Monte Carlo simulation of an equally weighted portfolio of the two financial assets is used to model and quantify the risk of the resultant portfolio. The results confirm that there are benefits to diversification, since the riskiness of the portfolio is less than the sum of the risks of the two financial assets. It is less risky to invest in a diversified portfolio that includes assets from the two different industries/stock markets. Given the dependence and contagion between global stock markets, the findings of this study are useful to local and international investors seeking a portfolio that includes developing countries' stock market indices, such as the South African financial assets. This study provides investors with a framework to quantify diversification effects, which allows for the avoidance of extreme risks whilst benefiting from extreme gains.
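The risk-aggregation step described in this abstract can be sketched directly. The following is a minimal sketch, assuming an illustrative dependence parameter (theta = 2), illustrative exponential rates, and a 95% level rather than the paper's fitted values; it samples the Gumbel copula by the Marshall-Olkin method and checks that the portfolio's expected shortfall does not exceed the weighted stand-alone expected shortfalls.

```python
import math
import random

random.seed(42)

def gumbel_pair(theta):
    """One (u1, u2) draw from a bivariate Gumbel copula via the
    Marshall-Olkin method: U_i = exp(-(E_i / V) ** (1 / theta)), with V
    a positive stable variate (Chambers-Mallows-Stuck sampler)."""
    alpha = 1.0 / theta
    w = random.uniform(1e-12, math.pi - 1e-12)
    e = random.expovariate(1.0)
    v = (math.sin(alpha * w) / math.sin(w) ** (1.0 / alpha)) * \
        (math.sin((1.0 - alpha) * w) / e) ** ((1.0 - alpha) / alpha)
    pair = []
    for _ in range(2):
        ui = math.exp(-(random.expovariate(1.0) / v) ** alpha)
        pair.append(min(ui, 1.0 - 1e-12))   # guard against rounding to 1.0
    return pair

def expected_shortfall(sample, q):
    """Mean of the worst (1 - q) share of the losses."""
    s = sorted(sample)
    tail = s[int(q * len(s)):]
    return sum(tail) / len(tail)

lam1, lam2, q = 1.0, 0.5, 0.95   # illustrative rates and level, not the paper's fits
l1, l2, lp = [], [], []
for _ in range(20000):
    u1, u2 = gumbel_pair(theta=2.0)
    x1 = -math.log(1.0 - u1) / lam1   # exponential-marginal losses by inversion
    x2 = -math.log(1.0 - u2) / lam2
    l1.append(x1); l2.append(x2); lp.append(0.5 * x1 + 0.5 * x2)

# Diversification benefit: portfolio ES sits below the weighted stand-alone ESs
benefit = 0.5 * expected_shortfall(l1, q) + 0.5 * expected_shortfall(l2, q) \
          - expected_shortfall(lp, q)
```

Because expected shortfall is subadditive, `benefit` is non-negative by construction, which is the diversification effect the abstract quantifies.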

Keywords: Expected Shortfall, Monte Carlo simulation, Value-at-Risk


Year: 2023       Vol.: 72       No.: 1      


Record ID: 159

Local Quadratic Regression: Maximizing Performance via a Modified PRESS** for Bandwidths Selection

Authors: E. Edionwe and O. Eguasa

Abstract:

In the application of nonparametric regression models, it is well established that the bandwidth (also called the smoothing parameter) is the single most crucial parameter determining the quality of the estimated responses obtained from the regression procedure, and that its choice (how small or large it is) is hugely influenced by the criterion applied for its selection. Under small-sample settings, which are typical of response surface studies, the penalized Prediction Error Sum of Squares (PRESS**) criterion is recommended for selecting this all-important parameter. For the purpose of selecting bandwidths with improved statistical properties, however, we propose a modified version of the PRESS** criterion specifically for the Local Quadratic Regression (LQR) model. Results from simulated data, as well as from two popular problems from the literature, show that the LQR procedure using bandwidths selected via the proposed modified criterion performs substantially better than its counterpart using bandwidths selected via the PRESS** criterion.
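Leaving aside the authors' specific PRESS** penalty, the mechanics of leave-one-out bandwidth selection for a local quadratic fit can be illustrated as below; the data, Gaussian kernel, and bandwidth grid are assumptions for the sketch, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 40)
y = np.sin(2 * np.pi * x) + rng.normal(0.0, 0.2, x.size)

def lqr_at(x0, xs, ys, h):
    """Local quadratic fit at x0 with a Gaussian kernel of bandwidth h;
    the fitted response is the intercept of the locally shifted fit."""
    w = np.exp(-0.5 * ((xs - x0) / h) ** 2)
    coef = np.polyfit(xs - x0, ys, deg=2, w=np.sqrt(w))  # sqrt: polyfit weights residuals
    return coef[-1]

def press(h, xs, ys):
    """Ordinary leave-one-out prediction error sum of squares for bandwidth h."""
    idx = np.arange(xs.size)
    return sum(
        (ys[i] - lqr_at(xs[i], xs[idx != i], ys[idx != i], h)) ** 2
        for i in range(xs.size)
    )

grid = [0.05, 0.10, 0.20, 0.40]
best_h = min(grid, key=lambda h: press(h, x, y))
```

A penalized criterion such as PRESS** adds a roughness or degrees-of-freedom penalty to the `press` value before the minimization, which is where the paper's modification enters.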

Keywords: Desirability Function, Hat matrix, Penalized Prediction Error Sum of Squares, Response Surface Methodology


Year: 2023       Vol.: 72       No.: 1      


Record ID: 158

Spatiotemporal Patterns of COVID-19 Cases in Quezon City, Philippines

Authors: Tricia Janylle B. Sta. Maria, Nancy E. Añez-Tandang, and Edrun R. Gayosa

Abstract:

Various studies have been undertaken to explore the spatial characteristics of the COVID-19 pandemic. However, only a few have considered the pandemic's temporal characteristics to assess space-time dynamics. This study focuses on COVID-19 spatiotemporal patterns in Quezon City, Philippines from November 2020 to October 2021. Spatial clustering and spatiotemporal patterns were analyzed based on a space-time cube (STC). Results showed that hot spots and cold spots were found in the city's northern and southern parts, respectively. A significant increasing pattern was also revealed throughout the study period. Moreover, STC analysis demonstrated that intensifying hot spots (locations that were statistically significant hot spots for 90% of the study period and in which the intensity of clustering of high COVID-19 case counts increased significantly overall) were primarily concentrated in the center and northern regions of Quezon City, where the majority of the barangays in Districts 2, 5, and 6 are located. Barangays identified with this pattern were Bagong Silangan, Batasan Hills, Commonwealth, Holy Spirit, Payatas, Matandang Balara, Pasong Putik Proper, Fairview, Pasong Tamo, and Sauyo. As a resurgence in COVID-19 cases is possible, identifying spatiotemporal trends and clustering patterns is vital for regulating and controlling COVID-19's spread. Thus, the study's findings and methods can be utilized to predict and manage epidemics and help decision-makers control existing and future outbreaks.

Keywords: spatial clustering, spatiotemporal analysis, space-time cube, coronavirus


Year: 2023       Vol.: 72       No.: 1      


Record ID: 157

Utilization of Machine Learning, Government-Based and Non-Conventional Indicators for Property Value Prediction in the Philippines

Authors: Gabriel Isaac L. Ramolete, Bryan Bramaskara, Dustin A. Reyes, and Adrienne Heinrich

Abstract:

Property appraisal and value estimation in the Philippines are prone to human error and bias, due to price subjectivity and the general difficulty in properly quantifying the impact of factors beyond the property itself. Predictive models for property valuation typically involve conventional features of the house (e.g., number of bathrooms) and market prices of nearby properties. This paper investigates the value of incorporating alternative data to account for deviations in true market value and improve property value predictions in the Philippines and other developing countries. The study considers public data and anchors socio-economic indicators to assess their relevance to property value prediction in the Philippines. By utilizing the Department of Trade and Industry's 2021 National Competitiveness Index Rating, this research also investigates the significance of a Local Government Unit's competitiveness based on its economic dynamism, government efficiency, infrastructure, and resiliency. Different commonly used Machine Learning (ML) methods and features from various data sources are compared, and it is found that the inclusion of government indicators has a substantial positive effect on model performance on top of conventional indicators that can be globally replicated. A Mean Absolute Percentage Error (MAPE) of 10.7-21% is obtained, which is competitive with the performance ranges of other reported models. A property segment (personalized) approach is proposed to achieve lower error rates in Philippine appraisal (in 87.5% of cases), better access and transparency for populations outside the real estate network, and minimally biased assessments, all of which are also relevant for other developing countries.

Keywords: property appraisal, spatial analysis, city competitiveness, clustering


Year: 2023       Vol.: 72       No.: 1      


Record ID: 156

Verification of Coffee Product Form and Determination of Conversion Rate From Coffee Dried Berries to Green Coffee Beans (GCB)

Authors: Dennis S. Mapa, Ph.D., Divina Gracia L. Del Prado, Ph.D., Vivian R. Ilarina, Rachel C. Lacsa, Manuela S. Nalugon, Abella A. Regala, Marivic C. De Luna, and Ray Francis B. De Castro

Abstract:

The Philippine Statistics Authority (PSA) collects, generates, and releases production data on coffee in the form of dried berries from the results of the Crops Production Survey. In the computation of the Supply Utilization Accounts and Food Balance Sheet for coffee, the coffee form used is green coffee beans (GCB), using a conversion rate of 28 percent from dried berries to GCB. However, the Food and Agriculture Organization of the United Nations (FAO) and International Coffee Organization (ICO) use a 50 percent conversion rate from dried berries to GCB. The common form of coffee traded by farmers and the conversion rate from dried berries to GCB were investigated through consultations with traders, processors, and other stakeholders; and surveys with coffee farmers and traders as respondents. The results of this study show that the common form of coffee traded by farmers is GCB, and the average conversion rate from dried berries to GCB is 50 percent.

Keywords: fresh berries, field visits, survey, processors, experiment, percent recovery


Year: 2022       Vol.: 71       No.: 2      


Record ID: 155

Determination of Dry Rubber Content of Rubber Cup Lump

Authors: Claire Nova O. Abdulatip and Honey Fe G. Boje

Abstract:

The quality of latex from rubber trees is determined by the amount of Dry Rubber Content (DRC). The price of cup lumps is directly dependent on the DRC, which is commonly determined through visual observation by rubber dealers; thus, no standard method is used by rubber buyers in the industry for farm-gate determination of cup lump DRC. The Philippines has used a 25% conversion rate from cup lumps to dry rubber, whereas other member countries use 50%. Therefore, this paper addresses the conversion rate of dry rubber content of rubber cup lumps in the Philippines. On-site validation was conducted by collecting data from selected rubber processing plants in major rubber-producing provinces, namely Zamboanga Sibugay, North Cotabato, and Bukidnon. Secondary data, such as cup lump and crumb rubber volumes from 2017 to 2019, were collected and analyzed using percentage ratio comparison. Results indicated that the DRC of cup lumps to dry rubber was more than 50%.

Keywords: Rubber, Conversion Rate, On-Site Validation


Year: 2022       Vol.: 71       No.: 2      


Record ID: 154

Topic Identification and Classification of Google Play Store Reviews

Authors: Daniel David M. Pamplona

Abstract:

Digital distribution platforms, such as the Google Play Store, contain an enormous quantity of information related to app data and user reviews. A particularly challenging task is to classify a large unstructured dataset into smaller clusters or topics. To this end, data from 19,886 user reviews was extracted from the Google Play Store. The main task is to determine app characteristics, through common themes, that are commonly mentioned in positive and negative reviews. The text data was preprocessed, and common topics were identified using Latent Dirichlet Allocation (LDA) separately for positive and negative reviews. The accuracy of the topics was assessed using a perplexity-based approach and human interpretation. To further validate the topic model, the topic assignment was used as the outcome variable in a Naive Bayes model with the reviews as input. Empirical results show that the extracted topics can be predicted well from the text reviews. Finally, the distribution of topics was calculated across the different app categories.
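The validation step, predicting the LDA topic assignment from the review text with a Naive Bayes model, can be sketched with a toy corpus; the reviews and topic labels below are invented for illustration.

```python
import math
from collections import Counter, defaultdict

# Tiny hypothetical corpus: reviews already labelled with an LDA-style topic
train = [
    ("app crashes after update", "bugs"),
    ("constant crashes and freezes", "bugs"),
    ("love the clean interface", "design"),
    ("beautiful interface and layout", "design"),
]

class NaiveBayes:
    """Multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs):
        self.word_counts = defaultdict(Counter)
        self.topic_counts = Counter()
        self.vocab = set()
        for text, topic in docs:
            self.topic_counts[topic] += 1
            for w in text.split():
                self.word_counts[topic][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        n_docs = sum(self.topic_counts.values())
        best, best_lp = None, -math.inf
        for topic, n_topic in self.topic_counts.items():
            lp = math.log(n_topic / n_docs)           # log prior
            total = sum(self.word_counts[topic].values()) + len(self.vocab)
            for w in text.split():
                # add-one smoothing keeps unseen words from zeroing the score
                lp += math.log((self.word_counts[topic][w] + 1) / total)
            if lp > best_lp:
                best, best_lp = topic, lp
        return best

nb = NaiveBayes().fit(train)
```

If held-out reviews are routed to the same topics the LDA model assigned them, the topics are predictable from the text, which is the paper's validation argument.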

Keywords: Topic Modeling, Latent Dirichlet Allocation, Naive Bayes Classifier, Perplexity


Year: 2022       Vol.: 71       No.: 2      


Record ID: 153

A Bayesian Hierarchical Model for COVID-19 Cases in Mindanao, Philippines

Authors: Jejemae D. Nacion and Bernadette F. Tubo

Abstract:

A Bayesian hierarchical modelling approach is utilized to nowcast COVID-19 cases in Mindanao, Philippines for the years 2020 to 2021. A spatio-temporal model is considered, and the proposed methodology explores a flexible way of correcting the time- and space-delayed reports of COVID-19 cases over a duration of four weeks for the 27 provinces in Mindanao via a Bayesian approach. The goal of the modelling approach is to include parameters that correct reporting delays in the dataset and to derive a model using the Integrated Nested Laplace Approximation (INLA). The study shows that the proposed model was able to capture the increasing trend of the COVID-19 case counts; that is, the prediction counts derived are closer to the true counts than the currently reported counts of COVID-19 cases, which showed a decreasing behavior. The ability of the proposed model to nowcast statistically significant estimates, particularly for epidemic counts of COVID-19 in the presence of reporting delays, may aid health authorities in implementing effective control measures and issuing warnings to the public.

Keywords: Bayesian inference, spatio-temporal model, reporting delay, nowcasting


Year: 2022       Vol.: 71       No.: 2      


Record ID: 152

Hierarchical Bayesian Model for Correcting Reporting Delays in Dengue Counts

Authors: Mikee T. Demecillo and Bernadette F. Tubo

Abstract:

Real-time surveillance and precise case estimation are necessary for situational awareness in order to spot trends and outbreaks and establish efficient control actions. Understanding the mechanisms behind a sudden rise or fall in disease cases over time is hampered by the reporting delays between disease onset and case reporting. This study uses a flexible temporal nowcasting model with Bayesian inference for latent Gaussian models built in R-INLA to correct reporting delays in weekly dengue surveillance data for Northern Mindanao from 2009 to 2010. Additionally, it seeks to quantify all the uncertainties involved in replacing the missing values. The statistical problem is to forecast the unreported run-off triangle counts from the counts actually observed. In contrast to the currently reported cases, which appear to be declining, the posterior predictive model on the given temporal dataset recognizes that there are more dengue cases than previously reported, supporting the actual scenario. This implies that even with delayed data, the model was still able to provide a reliable estimate of the true number of cases. This paper offers a nowcasting model to aid dengue control and sound decision-making by the authorities concerned.
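The reporting-triangle structure underlying the nowcast can be illustrated with a simple multiplicative completion; this is a deliberately non-Bayesian stand-in for the paper's R-INLA model, and the counts are hypothetical.

```python
# counts[w][d]: dengue cases with onset in week w reported after d weeks'
# delay; None marks cells not yet observable (hypothetical numbers; the
# paper itself fits a Bayesian latent Gaussian model in R-INLA).
counts = [
    [50, 30, 15, 5],
    [60, 35, 18, 7],
    [55, 32, 14, 6],
    [58, 33, None, None],
    [62, None, None, None],
]
complete = [r for r in counts if None not in r]

def reported_fraction(d):
    """Average share of a week's eventual total reported by delay d."""
    return sum(sum(r[: d + 1]) for r in complete) / sum(sum(r) for r in complete)

# Simple multiplicative completion of the run-off triangle
nowcast = []
for row in counts:
    obs = [c for c in row if c is not None]
    nowcast.append(sum(obs) / reported_fraction(len(obs) - 1))
```

Recent weeks get scaled up by the historical completion fractions, so the nowcast exceeds the raw reported total, which is the correction the abstract describes; the Bayesian version additionally provides the uncertainty around each completed cell.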

Keywords: Latent Gaussian Model, Nowcast, Count Data


Year: 2022       Vol.: 71       No.: 2      


Record ID: 151

Estimating the Magnitude of the Poor Households in Metro Manila Using the Poisson Regression Model

Authors: Bernadette B. Balamban, Anna Jean C. Pascasio, Driesch Lucien R. Cortel and Maxine R. Ridulme

Abstract:

In the Official Poverty Statistics, Metro Manila, also known as the National Capital Region (NCR), is one of the areas that belong to the least poor cluster – a cluster that has relatively low poverty incidences. The Philippine Statistics Authority released the 2018 Municipal and City Level Poverty Estimates using the Elbers, Lanjouw, and Lanjouw (ELL) methodology. The 2018 Small Area Estimates (SAE) of Poverty also released estimates for the 14 sub-areas in Metro Manila, ranging from 1.5 percent to 6.5 percent. The city-level poverty estimates were released as official statistics using a direct estimation technique. Given the relatively low poverty incidences of the region, this paper aims to estimate the 2018 poverty incidence for the legislative districts in NCR, including the 14 sub-areas of the City of Manila, using the Poisson regression methodology, and to compare the results with those of the ELL methodology. Data sources include the 2015 Census of Population and the merged 2018 Family Income and Expenditure Survey and January 2019 round of the Labor Force Survey. A total of five significant indicators were included in the final model. Results show that the Poisson model produced more reliable estimates for NCR than the ELL methodology. These SAE techniques allow for generating more granular poverty statistics useful for targeting poor beneficiaries. Furthermore, the information may provide an opportunity for the LGUs to act swiftly and provide appropriate subsidies for areas within Metro Manila.
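A Poisson regression of counts on covariates, the core of the methodology above, can be fitted by Newton-Raphson in a few lines; the data here are synthetic with a single covariate, not the census and survey indicators used in the paper.

```python
import math
import random

random.seed(1)

def poisson_draw(lam):
    """Knuth's multiplication sampler (fine for small means)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

# Synthetic counts with one covariate and a log link
b0_true, b1_true = 0.5, 1.2
x = [random.uniform(-1.0, 1.0) for _ in range(500)]
y = [poisson_draw(math.exp(b0_true + b1_true * xi)) for xi in x]

# Newton-Raphson on the Poisson log-likelihood
b0, b1 = math.log(sum(y) / len(y)), 0.0
for _ in range(25):
    g0 = g1 = h00 = h01 = h11 = 0.0
    for xi, yi in zip(x, y):
        mu = math.exp(b0 + b1 * xi)
        g0 += yi - mu                  # score for the intercept
        g1 += (yi - mu) * xi           # score for the slope
        h00 += mu                      # Fisher information entries
        h01 += mu * xi
        h11 += mu * xi * xi
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det
    b1 += (h00 * g1 - h01 * g0) / det
```

At convergence the score equations are satisfied and the coefficients recover the generating values up to sampling error.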

Keywords: poverty statistics, small area estimation, survey and census data, time invariance


Year: 2022       Vol.: 71       No.: 2      


Record ID: 150

Application of Consecutive Sampling Technique in a Clinical Survey for an Ordered Population: Does it Generate Accurate Statistics?

Authors: Mohamad Adam Bujang, Tg Mohd Ikhwan Tg Abu Bakar Sidik, and Nadiah Sa'at

Abstract:

This study aims to compare the statistical generalizations inferred from samples obtained by systematic sampling and by consecutive sampling, and then compare both sets of results with the true population parameters of the target population. The study was conducted using two approaches. The first was a comparison between sample statistics and population parameters based on a simulation analysis estimating the population parameters of three statistical distributions (i.e., Normal, Exponential, and Poisson) using seven sub-samples and 1,000 iterations. The second was a comparison between sample statistics and population parameters based on real-life data sets comprising six sub-samples and four parameters. Based on the simulation analysis, systematic sampling offers a greater advantage, having a smaller mean square error (MSE) in 40 out of 70 comparisons (57.1%), while consecutive sampling has a smaller MSE in 29 out of 70 comparisons (41.4%); only one MSE comparison was identical between the two techniques. Based on the validation approach, systematic sampling produced more accurate statistics than consecutive sampling in six out of eight comparisons. In summary, systematic sampling offers the better accuracy. However, consecutive sampling is still able to generate valid and accurate statistics even though it is a type of non-probability sampling, especially if a sufficiently large sample size is obtained for statistical analysis. Therefore, in situations where it is difficult to apply a systematic sampling technique in a particular clinical setting, researchers may opt to apply the consecutive technique in the recruitment process as an alternative, with a limitation on making generalizations about the target population.
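The simulation comparison of the two designs can be reproduced in miniature: for a population ordered by arrival (with a drift in the outcome), the MSE of the sample mean under systematic sampling should beat consecutive sampling, matching the direction of the paper's results. The population, drift, and sample sizes below are assumptions for the sketch.

```python
import random
import statistics

random.seed(7)
# An "ordered" clinic population: the outcome drifts with arrival order
population = [10.0 + 0.01 * i + random.gauss(0.0, 1.0) for i in range(2000)]
true_mean = statistics.fmean(population)
n = 100

def systematic(pop, n):
    """Every k-th subject from a random start."""
    step = len(pop) // n
    start = random.randrange(step)
    return pop[start::step][:n]

def consecutive(pop, n):
    """A contiguous run of subjects from a random start."""
    start = random.randrange(len(pop) - n + 1)
    return pop[start:start + n]

def mse(sampler, reps=500):
    return statistics.fmean(
        (statistics.fmean(sampler(population, n)) - true_mean) ** 2
        for _ in range(reps)
    )

mse_sys, mse_con = mse(systematic), mse(consecutive)
```

Systematic samples span the whole ordered list, so the drift averages out; a consecutive window inherits whatever local level it lands on, inflating its MSE.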

Keywords: population parameters; sample statistics; systematic sampling.


Year: 2022       Vol.: 71       No.: 1      


Record ID: 149

On Some Efficient Classes of Estimators Based on Higher Order Moments of an Auxiliary Attribute

Authors: Shashi Bhushan and Anoop Kumar

Abstract:

This paper discusses the problem of estimating the population mean utilizing information on the mean and variance of a qualitative characteristic. We introduce some efficient classes of estimators based on higher-order moments, such as the variance, of an auxiliary attribute. The conventional mean estimator, the Bhushan and Gupta (2016) estimator, and the traditional regression and ratio estimators proposed by Naik and Gupta (1996) are shown to be sub-classes of the proposed estimators for properly chosen values of the scalars involved. The performance of the suggested estimators has been assessed empirically and theoretically against the contemporary estimators.

Keywords: mean square error, efficiency, qualitative characteristics


Year: 2022       Vol.: 71       No.: 1      


Record ID: 148

An Application of CATANOVA and Logistic Regression on the Most Prevalent Sexually Transmitted Infection (A Case Study of the University of Nigeria Teaching Hospital)

Authors: Nnaemeka Martin Eze, Oluchukwu Chukwuemeka Asogwa, Samson Offorma Ugwu, Chinonso Michael Eze, Felix Obi Ohanuba, and Tobias Ejiofor Ugah

Abstract:

This research focused on the application of CATANOVA and logistic regression to the most prevalent Sexually Transmitted Infection (STI) reported in the University of Nigeria Teaching Hospital from 2010 to 2020. A population of 20,704 patients was recorded to have contracted eight (8) selected STIs. A prevalence analysis was computed to determine the most prevalent STI. Two-way CATANOVA cross-classification was computed to ascertain the age group and gender that suffer more from the most prevalent STI. Three-way CATANOVA was computed to ascertain the association among drug prescription, age, and gender of the Gonorrhea patients. A logistic regression model was fitted to predict infertility as an effect of the most prevalent STI. The prevalence analysis showed Gonorrhea infection to be the most prevalent STI at 33.08%. A population of 6,850 patients recorded to have contracted Gonorrhea infection from 2010 to 2020 was employed for the analysis. Two-way CATANOVA cross-classification showed that gender, age, and interaction effects were statistically significant at a 5% significance level. Males (3,752; 54.8%) suffer from Gonorrhea infection more than females (3,098; 45.2%), and those aged 30-39 years (1,946; 28.4%) suffer from it more than any other age interval. The interaction effect shows that the rate of contracting Gonorrhea infection by gender differs from one age interval to another. Three-way CATANOVA results showed that the drugs prescribed for the treatment of Gonorrhea infection depend on gender and age. Logistic regression results showed that increases in age, body mass index, blood pressure, blood sugar, bacteria quantity, and Gonorrhea history were associated with an increased likelihood of a Gonorrhea patient being infertile.

Keywords: Chi-square test, Prediction, Prevalence


Year: 2022       Vol.: 71       No.: 1      


Record ID: 147

Analytic Hierarchy Process with Rasch Measurement in the Construction of a Composite Metric of Student Online Learning Readiness Scale

Authors: Joyce DL. Grajo, James Roldan S. Reyes, Liza N. Comia, Lara Paul A. Ebal, Jared Jorim O. Mendoza, and Mara Sherlin DP. Talento

Abstract:

This paper developed the Online Learning Readiness Composite Scale (OLRCS), a composite measure of student online learning readiness based on five dimensions, namely (1) computer/internet self-efficacy; (2) self-directed learning; (3) learner control; (4) motivation for learning; and (5) online communication self-efficacy. A single metric of online learning readiness has an advantage over its disaggregated dimensions: it allows a summative description of each student, which school administrators can use for effective student targeting toward flexible learning. Rasch Analysis (RA) was performed to come up with an objective measure for each dimension, while the Analytic Hierarchy Process (AHP) was applied to aggregate the computed Rasch scores of the five dimensions. Three OLRCS were constructed using weights generated by (1) teacher participants, (2) student participants, and (3) combined student and teacher participants. Results showed that motivation for learning consistently received the highest weight, while online communication self-efficacy and computer/internet self-efficacy got low weights in all three OLRCS. Research findings also showed that student participants gave more importance to learner control than self-directed learning, unlike the teacher participants. The difference between the teacher and student perspectives merits detailed attention to optimize the online learning environment and enable individual support. Nevertheless, using cluster analysis, the distribution of students who are ready, undecided, or not ready for online learning is similar across the three constructed OLRCS.
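The AHP aggregation step can be sketched as a principal-eigenvector computation on a pairwise-comparison matrix; the 5x5 matrix below is a hypothetical Saaty-scale judgment set over the five dimensions, not the participants' actual comparisons.

```python
# Hypothetical pairwise-comparison matrix (Saaty scale) for the five
# readiness dimensions; A[i][j] says how much dimension i matters vs j,
# with the reciprocal property A[j][i] = 1 / A[i][j].
A = [
    [1,     3,   3,   1/2, 5],
    [1/3,   1,   2,   1/3, 3],
    [1/3, 1/2,   1,   1/4, 2],
    [2,     3,   4,   1,   5],
    [1/5, 1/3, 1/2,   1/5, 1],
]

def ahp_weights(A, iters=100):
    """Priority vector = principal eigenvector by power iteration,
    normalized so the weights sum to 1."""
    n = len(A)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [wi / s for wi in w]
    return w

weights = ahp_weights(A)
```

The resulting weights are what get applied to the per-dimension Rasch scores; in this toy matrix, dimension 4 (motivation for learning) dominates every row, so it receives the largest weight, mirroring the paper's finding.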

Keywords: multidimensional latent variable; multi-criteria decision analysis; linear aggregation


Year: 2022       Vol.: 71       No.: 1      


Record ID: 146

Implementing Effective Survey Operations for a Research and Development Survey in the Philippines

Authors: Ramoncito G. Cambel, Dalisay S. Maligalig, Maurice C. Borromeo, Ronald R. Roldan Jr., and Clifford B. Lesmoras

Abstract:

Studies have shown that research and development (R&D) is a good driver of economic growth. Policies and programs that are based on good quality data are expected to produce better results. Hence, to formulate and implement policies and programs in R&D, good quality data is vital. A good data support system is also essential in identifying critical areas that need intervention, formulating viable approaches in addressing these issues, and allocating limited resources. In the Philippines, the Department of Science and Technology (DOST) has been conducting the Survey on Research and Development Expenditures and Personnel (R&D Survey) since 2003 so that R&D data and indicators can be compiled. To ensure that good quality R&D data and indicators are achieved, the DOST granted a research fund to the Institute of Statistics (INSTAT) of the University of the Philippines Los Baños (UPLB) in 2018 to further improve the design, conduct and analysis of the R&D Survey. This paper describes the processes that were developed and implemented through this research grant in relation to the dimensions of data quality, namely, relevance, accuracy, timeliness, accessibility, coherence, and comparability. Based on the evaluation of these processes, the paper also recommends further improvement on the survey operations of future rounds of the R&D Survey.

Keywords: Survey on Research and Development Expenditures and Personnel, data quality


Year: 2022       Vol.: 71       No.: 1      


Record ID: 145

Analysis of Longitudinal Data with Missing Values in the Response and Covariates Using the Stochastic EM Algorithm

Authors: Ahmed M. Gad and Nesma M. Darwish

Abstract:

Longitudinal data are common in many disciplines, where repeated measurements on a response variable are collected for each subject. Missing values are unavoidable in longitudinal studies and may occur in the response variable, the covariates, or both. A dropout pattern occurs when some subjects leave the study prematurely. When the probability of missingness depends on the missing value, and possibly on the observed values, the missing data mechanism is termed non-random; ignoring the missing values in this case leads to biased inferences. In this paper we handle missing values in the covariates using multiple imputation (MI) and use the selection model to fit longitudinal data in the presence of non-random dropout. The stochastic EM (Expectation-Maximization) algorithm is developed to obtain the model parameter estimates, and parameter estimates of the dropout model are also obtained. Standard errors of the estimates are calculated using the developed Monte Carlo method. The performance of the proposed approach is evaluated through a simulation study, and the approach is also applied to a real data set.

Keywords: Interstitial Cystitis data; missing covariates; dropout missingness; multiple imputation; selection model; the SEM algorithm.


Year: 2022       Vol.: 71       No.: 1      


Record ID: 144

Two New Tests for Tail Independence in Extreme Value Models

Authors: Mohammad Bolbolian Ghalibaf

Abstract:

This paper proposes two new tests for tail independence in extreme value models. We use the approach of Falk and Michel, based on the conditional distribution function (df) of X + Y given that X + Y > c, to test for tail independence in extreme value models. We recommend using the Cramér-von Mises and Anderson-Darling tests for tail independence. Simulations show that the two tests perform better than the Kolmogorov-Smirnov test, which gives the best results among the tests proposed by Falk and Michel. Finally, using two real datasets, we illustrate the application of the two proposed tests as well as the traditional tests of Falk and Michel.

Keywords: extreme value model, tail independence, copula function, Cramér-von Mises test, Anderson-Darling test, Neyman-Pearson test, Kolmogorov-Smirnov test, Fisher's ? test, Chi-square goodness-of-fit test


Year: 2021       Vol.: 70       No.: 2      


Record ID: 143

Time Series Prediction of CO2 Emissions in Saudi Arabia Using ARIMA, GM(1,1), and NGBM(1,1) Models

Authors: Z. F. Althobaiti, and A. Shabri

Abstract:

The investigation of the economic aspects of gas emissions, in terms of their volume and consequences, is very important given the current increasing trend. Therefore, the prediction of carbon dioxide emissions in Saudi Arabia becomes necessary. This study uses annual time series data on CO2 emissions in Saudi Arabia from 1970 to 2016. The study built prediction models of CO2 emissions in Saudi Arabia using the Autoregressive Integrated Moving Average (ARIMA), the Grey Model GM(1,1), and the Nonlinear Grey Bernoulli Model NGBM(1,1), comparing their efficiency and accuracy based on the MAPE metric. The results revealed that the Nonlinear Grey Bernoulli Model (NGBM) is more accurate than the other prediction models. The results may be useful to the Saudi Arabian government in the development of its future economic policies. As a result, five policy recommendations have been proposed, each of which could play a significant role in the development of acceptable Saudi Arabian climate policies.
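Of the three models compared, GM(1,1) is compact enough to sketch in full: accumulate the series, fit the whitening equation by least squares, and predict from the exponential solution. The toy 5%-growth series below stands in for the CO2 data.

```python
import math

def gm11_fit(x0):
    """Fit GM(1,1): x0(k) + a*z1(k) = b, by closed-form least squares."""
    x1 = [x0[0]]
    for v in x0[1:]:
        x1.append(x1[-1] + v)                        # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, len(x0))]
    y = x0[1:]
    m, sz, sy = len(z), sum(z), sum(y)
    szz = sum(zi * zi for zi in z)
    szy = sum(zi * yi for zi, yi in zip(z, y))
    den = m * szz - sz * sz
    a = (sz * sy - m * szy) / den
    b = (szz * sy - sz * szy) / den
    return a, b

def gm11_value(x0, a, b, k):
    """Predicted original-series value at time k = 2, 3, ... (1-indexed)."""
    x1hat = lambda t: (x0[0] - b / a) * math.exp(-a * (t - 1)) + b / a
    return x1hat(k) - x1hat(k - 1)

series = [100.0 * 1.05 ** k for k in range(8)]       # toy growth series
a, b = gm11_fit(series)
fits = [gm11_value(series, a, b, k) for k in range(2, 9)]
mape = sum(abs(f - s) / s for f, s in zip(fits, series[1:])) / len(fits)
```

On a near-exponential series the in-sample MAPE is tiny, which is why grey models suit short, monotone emissions series; the NGBM(1,1) favored by the paper adds a Bernoulli power term to handle curvature.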

Keywords: annual time series data, Autoregressive Integrated Moving Average (ARIMA), CO2 emissions, global warming, Grey Model (GM), Nonlinear Grey Bernoulli Model (NGBM), prediction, Saudi Arabia


Year: 2021       Vol.: 70       No.: 2      


Record ID: 142

Classes of Estimators under New Calibration Schemes using Non-conventional Measures of Dispersion

Authors: A. Audu, R. Singh, S. Khare, and N. S. Dauran

Abstract:

In this paper, we propose two classes of estimators under two new calibration schemes for a heterogeneous population by incorporating auxiliary information from non-conventional measures of dispersion, which are robust against the presence of outliers in the data. Theoretical results are supported by simulation studies conducted on six bivariate populations generated using exponential and normal distributions. The biases and percentage relative efficiencies (PRE) of the proposed and other related estimators were computed, and the results indicated that the estimators proposed under the suggested calibration schemes perform, on average, more efficiently than the conventional unbiased estimator and the estimators of Rao and Khan (2016) and Nidhi et al. (2017).

Keywords: heterogeneous population, outliers, estimators, robust measures, population mean


Year: 2021       Vol.: 70       No.: 2      


Record ID: 141

A New Compound Probability Model Applicable to Count Data

Authors: Showkat Ahmad Dar, Anwar Hassan, Peer Bilal Ahmad and Bilal Ahmad Para

Abstract:

In this paper, we obtain a new model for count data by compounding the Poisson distribution with the two-parameter Pranav distribution. Important mathematical and statistical properties of the distribution are derived and discussed. Parameter estimation is then discussed using the maximum likelihood method. Finally, a real data set is analyzed to investigate the suitability of the proposed distribution in modeling count data.
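The compounding construction can be illustrated numerically. Since the abstract does not give the two-parameter density, the sketch below assumes the one-parameter Pranav density, f(x) = θ⁴/(θ⁴ + 6) (θ + x³) e^(−θx), as the mixing distribution for illustration, and integrates the Poisson mixture with the trapezoidal rule.

```python
import math

def pranav_pdf(lam, theta):
    """One-parameter Pranav density (assumed here for illustration)."""
    return theta ** 4 / (theta ** 4 + 6.0) * (theta + lam ** 3) * math.exp(-theta * lam)

def compound_pmf(k, theta, upper=80.0, steps=4000):
    """P(N = k) where N | lam ~ Poisson(lam) and lam ~ Pranav(theta),
    via trapezoidal integration of the mixture integral."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        lam = i * h
        if lam > 0.0:
            # Poisson pmf on the log scale to avoid overflow
            pois = math.exp(-lam + k * math.log(lam) - math.lgamma(k + 1))
        else:
            pois = 1.0 if k == 0 else 0.0
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * pois * pranav_pdf(lam, theta)
    return total * h

pmf = [compound_pmf(k, theta=1.0) for k in range(61)]
```

Summing the pmf over a wide enough range of counts recovers (numerically) a total of one, confirming the compound construction yields a proper count distribution.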

Keywords: Poisson distribution, two parameter Pranav distribution, compound distribution, count data, simulation study, maximum likelihood estimation

Download this article:

Year: 2021       Vol.: 70       No.: 2      


Record ID: 140    [ Page 1 of 6, No. 21 ]

A Modified Ridge Estimator for the Logistic Regression Model

Authors: Mazin M. Alanaz, Nada Nazar Alobaidi and Zakariya Yahya Algamal

Abstract:

The ridge estimator has consistently been demonstrated to be an attractive shrinkage method for reducing the effects of multicollinearity. The logistic regression model is a well-known model in applications where the response variable is binary. However, multicollinearity is known to negatively affect the variance of the maximum likelihood estimator of the logistic regression coefficients. To address this problem, a logistic ridge regression model has been proposed by numerous researchers. In this paper, a modified logistic ridge estimator (MLRE) is proposed and derived. The idea behind the MLRE is to obtain a diagonal matrix with small diagonal elements, which decreases the shrinkage parameter; the resulting estimator can therefore perform better with only a small amount of bias. Our Monte Carlo simulation results suggest that the MLRE can bring significant improvement relative to other existing estimators.
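The paper's modified diagonal matrix is not specified in the abstract; as a minimal illustration of the underlying mechanism, the sketch below (hypothetical function name, single predictor) fits a logistic model by gradient ascent with a ridge penalty on the slope and shows how the penalty shrinks the coefficient:

```python
import math

def ridge_logistic_1d(x, y, k, lr=0.1, iters=5000):
    """Fit intercept b0 and slope b1 of a logistic model by gradient ascent
    on the log-likelihood with ridge penalty k*b1^2/2 (intercept unpenalized)."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))   # fitted probability
            g0 += yi - p
            g1 += (yi - p) * xi
        b0 += lr * g0 / n
        b1 += lr * (g1 / n - k * b1)                       # ridge shrinkage on the slope
    return b0, b1
```

With k = 0 this reduces to ordinary maximum likelihood; increasing k trades a small bias for reduced variance, which is the motivation the abstract describes.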

Keywords: multicollinearity, ridge estimator, logistic regression model, shrinkage, Monte Carlo simulation

Download this article:

Year: 2021       Vol.: 70       No.: 2      


Record ID: 139    [ Page 1 of 6, No. 22 ]

Modelling the Right-Tail Conditional Expectation and Variance of Various Philippine Stocks Return using the Class of Beta Generalized Pareto Distribution

Authors: Angelo E. Marasigan

Abstract:

A risk measure such as the value-at-risk (VaR) is commonly used by financial institutions for capital management and for calculating the amount of risk exposure against a loss. The incoherence of VaR motivates the calculation of the conditional tail expectation (CTE) as a remedy. In this study, formulas for the CTE and the conditional tail variance (CTV) under the class of beta generalized Pareto (bgP) distributions were derived. The bgP is used to model the distribution of returns of various Philippine stock indices under different scenarios, with parameters estimated by maximum likelihood via the simulated annealing method in R; the fitted models are then used to compute the CTE and CTV of the return data. To assess the ability of the bgP to model financial data sets, a comparison with its generating distributions, namely the (generalized) beta and Pareto models, was carried out. Finally, historical simulation was also used to compute the corresponding VaR, CTE, and CTV for comparison with the model-based calculations.
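The bgP formulas are in the paper itself; the historical-simulation benchmark the abstract mentions, however, is generic and can be sketched directly (Python here rather than the authors' R; empirical-quantile convention is an illustrative choice):

```python
def historical_risk(returns, alpha=0.95):
    """Historical-simulation VaR, CTE, and CTV of losses at level alpha.
    Losses are negated returns; VaR is the empirical alpha-quantile of losses,
    CTE the mean loss at or beyond VaR, CTV the variance of those tail losses."""
    losses = sorted(-r for r in returns)
    i = int(alpha * len(losses))        # index of the empirical quantile
    var = losses[i]
    tail = losses[i:]                   # losses at or beyond the VaR
    cte = sum(tail) / len(tail)
    ctv = sum((l - cte) ** 2 for l in tail) / len(tail)
    return var, cte, ctv
```

By construction CTE ≥ VaR, and the CTV quantifies how dispersed the losses beyond the VaR are, which is the extra tail information the study exploits.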

Keywords: risk measure, Beta generalized Pareto distribution, value-at-risk, conditional tail expectation, conditional tail variance, heavy-tailed

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 138    [ Page 1 of 6, No. 23 ]

Development of an Alternative Municipal and City Level Competitiveness Index in the Philippines

Authors: Ramoncito G. Cambel and Zita VJ. Albacea

Abstract:

The Philippine Cities and Municipalities Competitiveness Index (CMCI) of the National Competitiveness Council (NCC) measures the competitiveness of local government units. This paper presents the results of a study that employed an alternative weighting method for the indicators through the statistical techniques of principal component analysis and factor analysis. Moreover, the study utilized both national census data and administrative data from the government. Four factors were extracted from the available data: “Housing and Household Characteristics, Financial Institutions and Capacity of Government for Health Services,” “Establishments for Tourist Accommodation,” “Cost of Labor,” and “Capacity of Local Government to Deliver Services.” Unlike the distribution of the NCC’s 2016 competitiveness levels of municipalities and cities, which is symmetric, the distribution of the proposed alternative competitiveness index is positively skewed. This suggests that only a few municipalities and cities can be considered highly competitive. A Moran’s I of 0.3457 indicates positive spatial autocorrelation in the competitiveness levels of Philippine municipalities and cities. Municipalities and cities in the Provinces of Bulacan, Pampanga, Cavite, Laguna, Rizal, and Cebu, as well as in the National Capital Region, are generally identified as belonging to the high-high cluster in terms of the proposed alternative 2016 competitiveness level. In contrast, municipalities and cities in the Provinces of Mindoro, Romblon, and Surigao del Norte, and those in the Cordillera Administrative Region, Ilocos Region, Cagayan Valley Region, and Eastern Visayas Region, generally belong to the low-low cluster. Statistical properties of the index such as consistency, accuracy, and precision were then assessed using the bootstrap resampling technique.
Results showed that the proposed alternative CMCI is unbiased and mean-square-error consistent. This indicates that the proposed CMCI could be used as an alternative municipal and city level competitiveness index for the Philippines. The index concentrates on four factors comprising 33 indicators, which are weighted according to their contribution to the variability of the index.
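The abstract reports a Moran's I of 0.3457; a minimal, generic implementation of the Moran's I statistic is shown below (the sparse pair-to-weight representation is an illustrative choice, not the study's actual spatial weight matrix):

```python
def morans_i(values, weights):
    """Moran's I spatial autocorrelation. values is a list of area-level
    observations; weights maps ordered index pairs (i, j) to spatial weights w_ij."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(w * dev[i] * dev[j] for (i, j), w in weights.items())   # cross products
    den = sum(d * d for d in dev)                                     # total variation
    W = sum(weights.values())                                         # sum of all weights
    return (n / W) * (num / den)
```

Values near +1 indicate clustering of similar values among neighbors (as the study finds), values near −1 indicate a checkerboard pattern, and values near −1/(n−1) indicate no spatial structure.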

Keywords: statistical index, spatial analysis, factor analysis

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 137    [ Page 1 of 6, No. 24 ]

Time Series Approach for Modelling the Merger and Acquisition Series: An Application to Indian Banking System

Authors: Varun Agiwal and Jitendra Kumar

Abstract:

In time series, the present observation depends not only on its own past observation(s) but also on other explanatory or exogenous variables. These variables do not influence the series continuously in the long run and may be removed, discontinued, or merged and acquired (M&A), since their effect may diminish through a less significant correlation. The M&A setting arises when one or more variables no longer meet the conditions required to survive in the system. To analyze the M&A concept, this study proposes a merged autoregressive (M-AR) model for examining the impact of a merger on the parameters as well as on the acquired series. A Bayesian approach is considered for parameter estimation under different loss functions and is compared with the least squares estimator. To test the presence/association of the merged series in the acquiring series, the Bayes factor, the full Bayesian significance test, and the posterior probability based on a credible interval are derived. A simulation study and an empirical application to banking indicators of Indian banks are carried out to evaluate the performance of the proposed model. The study concludes that the proposed time series model solves the problem of discontinuity in the series and is also able to manage the model statistically.

Keywords: autoregressive model, Bayesian inference, merger and acquisition series, Indian bank

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 136    [ Page 1 of 6, No. 25 ]

Application of U-statistics in Estimation of Scale Parameter of Bilal Distribution

Authors: R. Maya, M.R. Irshad, and S.P. Arun

Abstract:

In this work, we consider a new lifetime distribution, the Bilal distribution, and derive the best linear unbiased estimator (BLUE) of its scale parameter based on order statistics. We further estimate the scale parameter of the Bilal distribution by U-statistics with best linear functions of order statistics as kernels. The efficiency of the proposed estimator relative to the minimum variance unbiased estimator (MVUE) has also been evaluated.

Keywords: Order statistics, Bilal distribution, best linear unbiased estimator, U-statistics

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 135    [ Page 1 of 6, No. 26 ]

Analysis of Randomized Clinical Trial in the Presence of Non-Compliance: Comparison of Causal Models

Authors: Ali Reza Soltanian, Hassan Ahmadinia, and Ghodratollah Roshanaei

Abstract:

Non-compliance is a common deviation from the protocol of randomized clinical trials. Standard approaches for comparing the effects of drugs in randomized clinical trials in the presence of non-compliance are intention-to-treat (ITT), as-treated (AT), and per-protocol (PP) analysis. Each of these approaches has disadvantages when evaluating the effect of medication in the presence of non-compliance. The current study compared the accuracy of the instrumental variable (IV), intention-to-treat, as-treated, and per-protocol techniques. We assumed that non-compliance occurred for some patients in the new treatment group only, and independently of the patient outcomes. To compare these techniques, various scenarios were simulated. The MSE for both the PP and IV models changes only with the value of w (the non-compliance ratio): at all values of θ (the treatment effect), the MSE of these two models increases with an increasing non-compliance ratio, and changing θ does not affect the MSE. When non-compliance occurs only in the intervention group, the MSE of the AT model likewise changes only with w: for all values of θ, the MSE increases with the non-compliance ratio, and changing θ has no effect. The MSE of the ITT model, however, is strongly influenced by θ. At low values of θ, the MSE of this model is lower than that of the other methods and it better estimates the therapeutic effect; in this case the MSE increases only slightly as w increases. But as θ increases, so does the MSE, and then the MSE increases sharply with w.
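The simulation scenarios and exact models of the study are not reproduced here; the sketch below (hypothetical function names, one-sided non-compliance independent of outcome, Wald-type IV) illustrates the attenuation of the ITT estimate and its correction by the IV technique:

```python
import random

def simulate_trial(theta, w, n=4000, rng=random):
    """One two-arm trial: z is the random assignment; with probability w an
    assigned-treatment patient does not take the treatment (non-compliance in
    one arm only); outcome y depends on the treatment actually received."""
    data = []
    for _ in range(n):
        z = rng.random() < 0.5
        d = z and (rng.random() >= w)      # treatment actually received
        y = theta * d + rng.gauss(0.0, 1.0)
        data.append((z, d, y))
    return data

def itt_and_iv(data):
    """ITT: difference in mean outcome by assignment.
    IV (Wald): ITT divided by the difference in receipt rates by assignment."""
    mean = lambda v: sum(v) / len(v)
    y1 = mean([y for z, d, y in data if z])
    y0 = mean([y for z, d, y in data if not z])
    d1 = mean([1.0 if d else 0.0 for z, d, y in data if z])
    d0 = mean([1.0 if d else 0.0 for z, d, y in data if not z])
    itt = y1 - y0
    return itt, itt / (d1 - d0)
```

Under this setup the ITT estimate targets θ(1 − w), so it is biased toward zero as non-compliance grows, while the Wald IV estimate rescales it back toward θ at the cost of higher variance.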

Keywords: causal model, non-compliance, randomized clinical trials, simulation

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 134    [ Page 1 of 6, No. 27 ]

An Improved Class of Estimators of Population Mean under Simple Random Sampling

Authors: Shashi Bhushan, Anoop Kumar, Saurabh Singh, and Sumit Kumar

Abstract:

In this article, we consider an improved class of estimators of the population mean using additional information under simple random sampling (SRS). The expressions for the bias and mean square error of the proposed class of estimators are obtained up to first order of approximation. In addition, some well-known estimators are identified as particular members of the proposed class. The theoretical results are established, and an empirical study has been carried out using real and simulated data sets. The findings appear rather satisfactory, showing improvement over the existing estimators.

Keywords: simple random sampling, mean square error, efficiency

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 133    [ Page 1 of 6, No. 28 ]

A Procedure for the Generation of Small Area Estimates of Philippine Poverty Incidence

Authors: Nelda A. Nacion and Arturo Y. Pacificador

Abstract:

The main purpose of this study is to propose an alternative procedure for generating small area estimates of poverty incidence using imputation-like procedures, coupled with a calibration of the estimates to ensure coherence with the regional estimates. Specifically, the study applied a Deterministic Regression approach and a Stochastic Imputation-like procedure similar to Stochastic Regression, and then applied calibration techniques to ensure that the small area estimates conform to the known regional estimates. The difference between this methodology and the ELL method is that the error terms used for prediction are based on the empirical distribution of the residuals, which provides protection against misspecification of the error model. At the same time, the procedure is simpler given available computing resources. In addition, the proposed methodology utilized only data from the census short form, which is a 100% sample, thus eliminating another source of variation compared with using the census long form, which is collected from a sample of households. This study used the Family Income and Expenditure Survey (FIES) of 2009 and the Census of Population and Housing (CPH Form 2) of 2010 to produce reliable estimates of poverty incidence at the municipal level. Since the CPH is conducted in the Philippines every 10 years, the CPH 2010 was the latest data available. The researcher was able to produce small area estimates of poverty in the Philippines at the municipal level by combining survey data with auxiliary data derived from the census. The study fitted different models for each region. From the results, the following conclusions were derived: Stochastic Regression Imputation (SRI) performs better than Deterministic Regression Imputation (DRI) in attaching income to the CPH. The SRI was able to preserve the distribution in 82% of the regions, while the DRI preserved the distribution in only 10.22% of the validation sets.
Since the error in fitting the DRI to the CPH does not follow a well-known distribution (such as the Normal distribution), a non-parametric method was used to generate the errors attached in the SRI. The technique, Kernel Density Estimation (KDE) or the histogram method, was found to be effective for the stochastic regression. Using the calibration technique achieved municipal estimates that conform to the regional estimates. The estimates of poor households in the CPH reflect the bottom 30% of the wealth index.
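The study's actual models are region-specific and multivariate; as a one-predictor sketch of the core idea, deterministic regression imputation uses only the fitted value, while stochastic regression imputation adds a residual drawn from the empirical residual distribution, preserving the spread of the imputed variable (function names are illustrative):

```python
import random

def fit_ols_1d(x, y):
    """Simple least-squares line y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

def stochastic_impute(x_obs, y_obs, x_new, rng=random):
    """Stochastic regression imputation: fitted value plus a residual resampled
    from the empirical residual distribution (non-parametric, as in the study)."""
    a, b = fit_ols_1d(x_obs, y_obs)
    resid = [yi - (a + b * xi) for xi, yi in zip(x_obs, y_obs)]
    return [a + b * xi + rng.choice(resid) for xi in x_new]
```

Deterministic imputation would return only `a + b * xi`, which compresses the imputed distribution around the regression line; resampling residuals avoids that compression, which is why the SRI preserved the income distribution far more often.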

Keywords:

Download this article:

Year: 2021       Vol.: 70       No.: 1      


Record ID: 132    [ Page 1 of 6, No. 29 ]

Using Uncertainty and Sobol’ Sensitivity Analysis Techniques in the Evaluation of a Composite Provincial Level Food Security Index

Authors: Christian P. Umali and Felino P. Lansigan

Abstract:

A composite provincial level food security index can measure food security at the sub-national level, which can be helpful in policy making. However, it can be non-robust due to the uncertainties involved in the choice of input factors used in its construction, namely: different sources of data, normalization methods, weighting schemes, aggregation systems, and the level of importance placed on the different dimensions of food security, i.e., availability, accessibility, utilization, and stability. In this study, an uncertainty analysis technique was employed to assess the robustness of the index constructed through the conventional approach. Sensitivity analysis was done to quantify the influence of each factor in the index-building process, identify factors that should be prioritized and can be fixed in the successive development of the index, and determine which levels of the factors are responsible for producing the desirable model outcomes. The study was exploratory in considering the ratio-with-mean normalization technique and a combination of the additive and geometric aggregation methods as potential inputs in the index construction. In computing the Sobol’ sensitivity indices, the formulas suggested by Jansen et al. (1994) and Nossent and Bauwens (2012) were investigated and compared under varying sample sizes. The results can guide a more appropriate choice of procedure for computing the Sobol’ sensitivity indices at an optimum sample size, and likewise provide insights for the future development of a uniform and more defensible composite provincial level food security index.
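The abstract compares estimator formulas without stating them; as one concrete example, Jansen's total-order Sobol' estimator can be sketched as follows (uniform inputs on [0, 1] and the function name are illustrative assumptions; the study's own input factors are categorical construction choices):

```python
import random

def sobol_total_jansen(f, k, N, rng=random):
    """Total-order Sobol' indices via Jansen's estimator: draw two independent
    N-by-k sample matrices A and B; AB_i equals A with column i taken from B.
    ST_i = mean((f(A) - f(AB_i))^2) / (2 * Var(f(A)))."""
    A = [[rng.random() for _ in range(k)] for _ in range(N)]
    B = [[rng.random() for _ in range(k)] for _ in range(N)]
    fA = [f(row) for row in A]
    mean = sum(fA) / N
    var = sum((v - mean) ** 2 for v in fA) / N
    ST = []
    for i in range(k):
        num = 0.0
        for j in range(N):
            abi = list(A[j])
            abi[i] = B[j][i]          # swap in column i from B
            num += (fA[j] - f(abi)) ** 2
        ST.append(num / (2.0 * N) / var)
    return ST
```

For the additive test function f(x) = 4x₀ + x₁ the total-order indices equal the first-order ones, 16/17 and 1/17, so the estimator's sample-size behavior is easy to check, mirroring the varying-sample-size comparison the study performs.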

Keywords: uncertainty analysis, sensitivity analysis, Sobol’ sensitivity index, Monte Carlo integral, composite index, food security

Download this article:

Year: 2020       Vol.: 69       No.: 2      


Record ID: 131    [ Page 1 of 6, No. 30 ]

Investigation of Factors Contributing to Indigenous Language Decline in Nigeria

Authors: N. A. Ikoba and E. T. Jolayemi

Abstract:

The declining fortunes of some of Nigeria’s indigenous languages are examined in this paper. A multi-dimensional indigenous language-use questionnaire was constructed to elicit data through a survey carried out in some Nigerian cities. The aim was to acquire relevant data on indigenous language ability and the possible causes of the decline in language use. The results from the survey showed a low level of indigenous language literacy for most of the languages surveyed. The proportion of language use at home was also generally low for most of the surveyed languages, below the 70% threshold for virile languages. Several reasons were adduced for the non-transfer of indigenous language ability from parents to children, and tests of statistical independence showed that the major reasons for the decline were the respondents’ perception that their language is inferior to English, the belief that the child will be limited in school, negligence, and the inability of parents to speak their heritage language. A logistic regression analysis of the data also showed that the acquisition of language literacy depended on a person’s place of childhood, age, level of education, frequency of use at home, and the indigenous language spoken by the person’s mother.

Keywords: indigenous languages, language literacy, test of independence, intergenerational transmission, logistic regression

Download this article:

Year: 2020       Vol.: 69       No.: 2      

