
Time Series - Science topic

Explore the latest questions and answers in Time Series, and find Time Series experts.
Questions related to Time Series
  • asked a question related to Time Series
Question
4 answers
Hi everyone, I would appreciate some guidance on this.
My study design is a serial cross-sectional study with ten years of data. The goal is to identify changes in the workload of activities performed by providers over the past 10 years.
The workload, or amount of work, covers the nature of the activities (e.g., lab tests) as well as the number of patients.
I am not sure whether I can apply time series analysis, because the design is repeated cross-sectional, so the sample may differ each year. How should I analyse this? Is time series the correct approach?
Thanks
Relevant answer
Answer
Sorry Benazir Mahar, I gave you the wrong answer; I confused your question with another one. Thanks for your answer.
  • asked a question related to Time Series
Question
1 answer
1.1. Background of the Study
Financial development plays a crucial role in driving economic growth by facilitating various functions such as financial intermediation, reducing transaction costs, and enabling diversification. It encompasses the effective mobilization of domestic savings for productive investments, which is particularly significant for developing nations in alleviating poverty and fostering economic progress (Levine, 2005; Ellahi, 2011). The development of the financial system is vital for the accumulation of capital, efficient allocation of resources, and technological advancements, all of which are fundamental ingredients for sustained economic growth (Nkoro & Uko, 2013).
The relationship between financial development and economic growth has been a subject of theoretical and empirical analysis. Two opposing theories, namely the supply-leading theory and the demand-following theory, present divergent perspectives on the causal link between financial development and economic growth. The supply-leading theory posits that financial development precedes economic development, as the financial sector supplies the necessary financing for productive investments. In contrast, the demand-following theory argues that economic expansion drives the development of the financial sector, as financing and credit are derived from the demands of the economy (Malarvizhi et al., 2019).
Moreover, the impact of money supply on economic growth is another crucial aspect of financial development. Expansionary monetary policies leading to an increase in the money supply can result in lower interest rates, increased lending and investment, and ultimately, higher gross output and economic growth (Arfanuzzaman, 2014). In this context, the measurement of financial depth, particularly through broad money (M2), becomes significant as it includes the components of narrow money and reflects changes in the overall money supply.
The financial crisis of 2008/2009 demonstrated the critical role of the financial system in the real economy. The United Kingdom, home to one of the most highly developed financial systems globally, experienced severe repercussions from the crisis, highlighting the interconnectedness of financial activities with economic performance. The growth of the UK's banking sector and its contribution to the country's gross value added and employment further underscore the importance of a robust financial system for economic vitality (Tyler, 2015; World Bank, 2012).
Additionally, attracting foreign direct investment (FDI) has been a significant policy goal for many developing countries. FDI brings potential benefits such as productivity gains, technology transfers, managerial skills, and access to markets. Identifying factors that impede or induce foreign capital flows into host countries is crucial for policymakers seeking to leverage FDI for economic growth (Aitken & Harrison, 1999; World Bank, 1997a, b).
Cameroon hosts the Bank of Central African States (BEAC), the central bank of the member states of the Central African Economic and Monetary Community (CEMAC); commercial banks, the postal bank (CAMPOST), insurance companies, and non-bank financial institutions all operate under the supervision of this central bank (Puatwoe & Piabuo, 2017). The banking sector plays a major role in Cameroon's financial sector: it accounted for about 84.4% of the financial sector's total assets in 2005 and contributed 19.6% to GDP. The sector is still in its infancy, operates with a very limited range of financial instruments, and consists mostly of banks as its main arm, with an underdeveloped financial market (Puatwoe & Piabuo, 2017).
In Nigeria, the financial sector has grown steadily in recent times, despite the socio-economic peculiarities of the country occasioned by weak institutional quality, poor governance, corruption, and insurgency in some parts of the country, among other factors (Akintola, Oji-Okoro & Itodo, 2020).
The link between finance and economic growth has also been discussed by many scholars in Africa at different times. For example, Chandang Kurarathe (2001), using total private credit to GDP and the value-added ratio as proxies for financial sector development, concluded that it has a positive direct effect on per capita GDP; that is, improved financial intermediation and increased liquidity promote economic growth in South Africa. In the same manner, Torroam J.tabor and Chiang (2013) used the stock of money supply, domestic credit, foreign real credit, inflation, and the exchange rate as proxies for financial deepening and applied a cointegration and error correction model for the period 1990-2011 in Nigeria. They concluded that financial sector development plays an essential role in the Nigerian economy.
Murcy et al. (2015) examined the relationship between financial development and economic growth in Kenya using annual time series data. They employed the autoregressive distributed lag (ARDL) approach to accommodate the small sample and to address the problem of endogeneity, and found that financial development has a positive and statistically significant effect on economic growth in Kenya in both the long run and the short run, thereby confirming the supply-leading hypothesis. Furthermore, Odhiambo (2008) investigated the causality between finance and economic growth in Kenya over the period 1969-2005, employing a dynamic multivariate Granger causality test and an error correction model. He found only one-way causality running from economic growth to finance, indicating that finance plays a minor role in contributing to economic growth.
Prior to the 1991 reform period, Ethiopia's financial system was governed by the central government, just like that of many other developing nations. In particular, all private banks were nationalized from 1974 until 1991, the duration of the socialist Derg administration. The two government-owned banks, Development Bank of Ethiopia (DBE) and Commercial Bank of Ethiopia (CBE), were the dominating banks during this time (Alemayehu, 2006).
During the years 1981 to 1990, the Commercial Bank of Ethiopia (CBE) was the leading loan provider, accounting for 50% of total credit, followed by the Development Bank of Ethiopia (DBE) with 40%. The Ethiopian Construction Bank, on the other hand, covered only 10% of total credit services. The banking and insurance industries were opened to private sector participation by Proclamation No. 84/1994. The proclamation signalled the start of a new era in Ethiopia's financial sector, although participation was restricted to Ethiopian citizens alone. Private banking and insurance firms proliferated throughout the nation after this proclamation (Alemayehu, 2006). The financial sector now consists of about 31 microfinance institutions, 18 insurance firms, and 30 banks with 5,311 branches (NBE, 2022/23).
It is evident that both private and public credit have increased in recent years in the country, but the literature on the relationship between financial depth and economic growth in Ethiopia and its impact is very scant. Therefore, the purpose of this study is to examine the relationship between financial development and economic growth in Ethiopia. Financial development is a multidimensional concept that encompasses the establishment of efficient financial institutions, the deepening of financial markets, and the expansion of financial services. It is widely recognized as a crucial driver of economic growth in both developed and developing countries.
The study aims to investigate the causal relationship between financial depth, measured by indicators such as the size of the banking sector, stock market capitalization, and credit to the private sector, and economic growth in Ethiopia. By analyzing this relationship, the study seeks to provide insights into the specific mechanisms through which financial development influences economic growth in the Ethiopian context.
To conduct this study, a comprehensive dataset covering the relevant financial and economic variables will be collected for the period from 1991 to 2023. The data will include indicators of financial development, such as the ratio of bank credit to GDP, the number of bank branches, and the stock market turnover. Economic growth will be measured by real GDP growth rates.
The study will employ econometric techniques, such as panel data analysis, to estimate the causal relationship between financial development and economic growth. Controlling for other factors that can influence economic growth, such as human capital, infrastructure, and institutional quality, the study will assess the specific impact of financial depth on economic growth in Ethiopia.
The findings of this study are expected to provide valuable insights for policymakers, financial regulators, and other stakeholders in Ethiopia. Understanding the relationship between financial development and economic growth will help inform policy decisions aimed at promoting sustainable and inclusive economic growth in the country. Additionally, the study will contribute to the existing literature on the subject by providing empirical evidence from the Ethiopian context, which has been relatively underexplored in previous studies.
Relevant answer
Answer
I consider the implications of your proposed research topic to be of great importance to the economic development of your country and to the academic community as well. However, I think it would help if you could be more specific about the kind of assistance you need.
Do you need an expert to help you structure your introduction, or to brainstorm with you on achieving the overall objective of your study?
  • asked a question related to Time Series
Question
4 answers
Hi,
I've got myself totally confused and need some help.
I have two univariate time series that represent the area of land (in m2) changing from natural to non-natural at both yearly and quarterly intervals over a 12 year period [12 observations in one - 48 in another]. My interest is to identify whether a change in the area occurred following a shift in policy.
I have therefore looked at using an interrupted time series approach to investigate this. Initially I just compared segmented linear regressions as a way of getting an indication of whether a change occurred. However, my data are non-stationary and, I am aware, therefore unsuitable for regression in their current form.
Whilst I understand that I can de-trend the data to make them stationary, I worry that doing so would damage the output. My issue is really confusion around stationarity: I believe that I am trying to measure the trend, and I assume that by making the data stationary, the trend (the very thing I am trying to assess) will no longer be accounted for.
Could someone help to explain to me how stationary data would allow the identification of a change in the trends during the periods both prior to and after the implementation of the policy?
  • asked a question related to Time Series
Question
3 answers
I have analyzed a time series with the MFDFA algorithm, but the generalized Hurst exponent ranges from 0.015 to 0.04, quite near 0, and the width of the multifractal spectrum I got is about 0.04, which is very narrow. So I wonder whether this time series is fractal or multifractal, and what a generalized Hurst exponent equal to 0 means.
Relevant answer
There are no movements or they are cyclical with a very high oscillation frequency.
  • asked a question related to Time Series
Question
4 answers
I am a doctorate degree student, working on my thesis.
Relevant answer
Answer
Both are good. However, for convenience and flexibility, I would recommend STATA. Other options are R and MATLAB.
  • asked a question related to Time Series
Question
5 answers
How can I calculate the time delay and embedding dimension for the Mackey-Glass chaotic time series, and which of the two is more important?
And how can we compute them in MATLAB?
Relevant answer
Answer
In Mackey-Glass and other chaotic time series, shorter time delays are generally more important as they capture the underlying dynamics more accurately and prevent overfitting.
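As a rough illustration, here is a minimal Python sketch (assuming NumPy and a 1-D array x holding the scalar time series; the names are mine) that picks a candidate embedding delay as the first local minimum of the sample autocorrelation function. Mutual-information and false-nearest-neighbour criteria, which MATLAB toolboxes also provide, follow the same idea of scanning candidate lags.
import numpy as np

def first_acf_minimum(x, max_lag=100):
    # Candidate delay = first lag at which the sample autocorrelation reaches a local minimum.
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    acf = [1.0]
    for lag in range(1, max_lag + 1):
        acf.append(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))
    acf = np.array(acf)
    for lag in range(1, max_lag):
        if acf[lag] < acf[lag - 1] and acf[lag] < acf[lag + 1]:
            return lag  # first local minimum
    return max_lag      # fallback if no minimum is found within max_lag

# Example with a hypothetical Mackey-Glass sample x:
# delay = first_acf_minimum(x)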
  • asked a question related to Time Series
Question
9 answers
Here is the case: as I said, I am working on how macroeconomic variables affect REIT index returns. To understand how macroeconomic variables affect REITs, which tests or estimation methods should I use?
I know I can use OLS, but is there any other method to use? All my time series are stationary at I(0).
Relevant answer
Answer
You can use econometric methods such as regression analysis, Vector Autoregression (VAR), or Granger causality tests to analyze how macroeconomic variables affect REIT index returns.
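As a minimal sketch of that route (assuming statsmodels and a pandas DataFrame df whose columns are the stationary REIT return series and the macro variables; the column names here are only illustrative), one could fit a VAR and run a pairwise Granger causality test:
import pandas as pd
from statsmodels.tsa.api import VAR
from statsmodels.tsa.stattools import grangercausalitytests

# df: DataFrame with columns such as 'reit_return', 'inflation', 'interest_rate' (hypothetical names)
model = VAR(df)
results = model.fit(maxlags=8, ic='aic')   # lag order selected by AIC
print(results.summary())

# Does 'inflation' Granger-cause 'reit_return'? (column order: [effect, cause])
grangercausalitytests(df[['reit_return', 'inflation']], maxlag=4)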
  • asked a question related to Time Series
Question
3 answers
Hi!
I want to use the ADL model for my data analysis. However, after performing a stationarity test, the dependent variable and 6 of the 8 independent variables are stationary only in differences. The other two are stationary in levels.
Is the cointegration test always necessary?
If so, I found on the Internet that I can only use the Pesaran bounds test because I have a mix of I(0) and I(1) variables. Is that true? I am not sure.
And how do you perform that test?
Thanks a lot for your suggestions.
Relevant answer
Answer
After performing your stationarity tests, you found that the dependent variable and 6 of the 8 independent variables are stationary only in first differences, while the other two are stationary in levels. In this scenario you can still proceed with an autoregressive distributed lag (ADL/ARDL) model.
The ADL model is suitable precisely when variables exhibit different stationarity properties, with some being I(0) and others I(1). It includes lagged values of both the dependent and the independent variables, so the short-run dynamics of the differenced variables and the long-run (level) relationships can be captured in a single specification. Regarding cointegration, the Pesaran et al. bounds test was designed for exactly this mix of I(0) and I(1) regressors (but not I(2)), so running it before interpreting any long-run coefficients is the standard practice.
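If it helps, here is a small Python sketch (assuming pandas/statsmodels and a DataFrame df containing your series; names and the 5% level are illustrative) that classifies each variable as I(0) or I(1) with the ADF test, which is the usual first step before specifying a bounds-test/ARDL model:
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def integration_order(series, alpha=0.05):
    # 0 if stationary in levels, 1 if only the first difference is stationary.
    p_level = adfuller(series.dropna())[1]
    if p_level < alpha:
        return 0
    p_diff = adfuller(series.diff().dropna())[1]
    return 1 if p_diff < alpha else None  # None: possibly I(2) or higher; the bounds test would not be valid

# df: DataFrame holding the dependent and independent variables (hypothetical)
for name, col in df.items():
    print(f"{name}: I({integration_order(col)})")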
  • asked a question related to Time Series
Question
2 answers
I'm currently in a research project on wavelet transform denoising. Due to a lack of statistical knowledge I'm not able to do research on thresholding methods, so I'm curious whether there are any other research directions (I would prefer an engineering project). Thank you for your answers.
Relevant answer
Answer
Wavelet denoising is used, for example, in radar and sonar signal processing.
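For a concrete, engineering-flavoured starting point, here is a minimal sketch of classical wavelet shrinkage in Python (assuming the PyWavelets package and a 1-D noisy signal; the db4 wavelet and the universal threshold are common defaults, not the only choices):
import numpy as np
import pywt

def wavelet_denoise(x, wavelet='db4', level=4):
    # Decompose, soft-threshold the detail coefficients, and reconstruct.
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise scale from the finest detail level
    thr = sigma * np.sqrt(2 * np.log(len(x)))             # universal threshold
    coeffs[1:] = [pywt.threshold(c, value=thr, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(x)]

# Example: denoised = wavelet_denoise(noisy_signal)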
  • asked a question related to Time Series
Question
5 answers
For example, the sales of mobile phones are increasing in a country. If we want to study whether the introduction of a new technology in mobile phones changes or shifts the growth trend in sales, which statistical tools are best suited for this type of study?
Relevant answer
Answer
Use correlation or regression methods
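One regression-based option, sketched below in Python (assuming statsmodels/pandas and a DataFrame with monthly sales, a time index t, and a dummy post marking observations after the technology was introduced; all names are illustrative), is a segmented/interrupted time-series regression in which the coefficient on t_after tests whether the growth trend shifted:
import pandas as pd
import statsmodels.formula.api as smf

# df columns (hypothetical): 'sales', 't' = 0, 1, 2, ..., 'post' = 0 before / 1 after the introduction
t0 = df.loc[df['post'] == 1, 't'].min()          # first post-introduction period
df['t_after'] = (df['t'] - t0).clip(lower=0)     # time elapsed since the introduction

model = smf.ols('sales ~ t + post + t_after', data=df).fit()
print(model.summary())
# 'post'    : immediate level shift at the introduction
# 't_after' : change in the growth rate (slope) after the introduction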
  • asked a question related to Time Series
Question
4 answers
I want to know the time-series characteristics of demand for manufactured products. Is there a good paper covering the state of the art on how the demand for manufactured goods fluctuates or grows? If there is no such paper, do you know of any papers that have treated or raised related problems?
Relevant answer
Answer
Thank you very much. Your information was very useful. I have downloaded all three papers.
Best,
Yoshinori
  • asked a question related to Time Series
Question
3 answers
Dear Academics,
I have two time series with 12 observations each. Both are yearly quantitative data for the last 12 years. I have presented them as graphs, and they appear to be negatively correlated.
How can I show the relationship between them statistically (the effect of the first series on the second, or at least the correlation)? Which tests should I perform?
The data are non-stationary.
  • asked a question related to Time Series
Question
3 answers
Hello guys
I want to employ fMRI in my research.
As a first step, I want to know whether fMRI data is an image like MRI,
or whether I should treat fMRI data as a time series when it comes to analysis.
Thank you
Relevant answer
Answer
MRI datasets typically result in high-resolution three-dimensional images representing anatomical structures. These images are often stored in formats such as DICOM (Digital Imaging and Communications in Medicine) or NIfTI (Neuroimaging Informatics Technology Initiative). fMRI datasets produce time-series data representing changes in brain activity over time. These data are often stored in formats compatible with neuroimaging software packages, such as NIfTI, Analyze, or MINC (Medical Imaging NetCDF). fMRI data can be conceptualized and analyzed both as images and time-series. The choice of representation depends on the specific research question and analysis techniques being employed. For many analyses, researchers will use both approaches, leveraging the spatial information provided by the image-like representation and the temporal dynamics captured in the time-series data.
  • asked a question related to Time Series
Question
1 answer
I've fitted a latent growth mixture model to time series data. It consists of a value (population prevalence) at 11 time points for a sample of 150 areas. Said model was fitted using the lcmm package in R and identified a two class model as optimal - reflecting an hlme model as follows:
gmm2 <- gridsearch(rep = 1500, maxiter = 50, minit = gmm1, hlme(Value ~ jrtime, subject = "ID", random=~jrtime, ng = 2, data = rec, mixture = ~ jrtime, nwg=T))
Said model uses the "Value"/prevalence data for each area as the primary variable. However, the original "Value" column within the data also relates to 95% confidence intervals (reflecting different sample sizes which contributed to the observations at each time point and for each area). Under the current approach all observations of "Value" are treated equally. Should I (and is there a good method through which to) account for the differing levels of uncertainty in "Value" as part of my lcmm?
I wondered if such could be coded into the package, but this does not appear to be the case. Therefore I wondered whether a manual account could be taken (such as adding the CI range as a covariate)? However, I also wondered whether there was scope to add such as part of the prior setting if applying an alternative Bayesian approach.
Any advice/links to relevant literature (or shareable code) would be hugely appreciated.
Relevant answer
Answer
Yes, it is important to account for uncertainty in latent class growth mixture models (LCGMM). These models are used to identify homogeneous subpopulations within a heterogeneous population, and they help to identify meaningful groups or classes of individuals.
  • asked a question related to Time Series
Question
1 answer
Hello everyone!
I'm currently working on a time series of vegetation indices, and a key question I have is which method of atmospheric correction to use. Is it absolutely crucial for achieving accurate results, or is it something I can potentially skip? I would appreciate some insights on this matter.
Thank you!
Relevant answer
Answer
I hope you are doing well. I have previously worked on something similar, so one input I can give you is that when analysing time series data, it is vital to try to standardize the whole dataset from the first to the last entry. Even though vegetation indices cancel out many confounding effects, it is still more robust to apply atmospheric correction to all the data, as you want all other variables (other than your target index) to vary as little as possible.
I know this is vague but I'll need more information from you to discuss this any deeper.
Best wishes.
  • asked a question related to Time Series
Question
5 answers
In the Pettitt test we assume that the time series may have only one breakpoint. How does the Pettitt test work if the time series has two or more breakpoints?
Relevant answer
Answer
A late answer, maybe useful for other people. To the best of my knowledge, the Pettitt test hasn't been extended to more than one breakpoint: the extension is challenging due to the nonparametric nature of the test. Model-based statistical techniques may be more relevant for handling complex situations (e.g., with an unknown number of changepoints). In R, there are many packages dealing with this, such as the changepoint and bcp packages. If anybody wants a Bayesian method, one choice is the Rbeast package I recently developed; it has been used in dozens of fields and is available in R, Python, and Matlab: https://github.com/zhaokg/Rbeast.
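If Python is an option, here is a quick sketch of multiple-changepoint detection with the ruptures package (a penalised-cost approach, i.e. a different technique from the Pettitt test; the penalty value below is an illustrative choice that needs tuning):
import numpy as np
import ruptures as rpt

# signal: 1-D NumPy array holding the series (hypothetical)
algo = rpt.Pelt(model="rbf", min_size=5).fit(signal)
breakpoints = algo.predict(pen=10)   # the penalty controls how many changepoints are returned
print("Estimated changepoint indices:", breakpoints[:-1])  # the last entry is just len(signal)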
  • asked a question related to Time Series
Question
1 answer
Hey,
I'm pretty new to 3D kinematic analysis in sports, and I'm trying to follow this "protocol", i.e. the exact structure of results as in this article: https://peerj.com/articles/10841/
However, I think I understand how they calculate the angles at key events and the ROM, but I'm not sure how they calculate the "angular changing rate".
As data, I have time series of angular velocity and acceleration. But how do you get just "one number" from a time series? Is it also taken at key events, or can I calculate the "angular changing rate" in a way that yields a single number from a time series?
Thanks!
Relevant answer
Answer
Hi. Suppose a = 20 deg and b = 50 deg.
If the time at a is 0 s and the time at b is 0.2 s, then the event from a to b lasts 0.2 seconds. The angular change rate is then (50-20)/50, expressed in %, and the ROM is 50-20, in degrees.
Best regards
  • asked a question related to Time Series
Question
2 answers
Dear Researcher, 
I am using the SIMHYD hydrological model. This model requires input in the Tarsier daily time series format (.tts). I have all the information in Excel sheets; kindly guide me on how to convert the data from Excel to the Tarsier daily time series format.
Relevant answer
Answer
I also have the same question. If you have found a way, could you please guide me?
I really appreciate your help.
Kind regards,
Sineka
  • asked a question related to Time Series
Question
5 answers
Suppose I have daily data for a certain time series variable. If I want to use that variable as quarterly data in my research, how should I organize the data?
Relevant answer
Answer
Through the following:
  • Interpolation - this helps you convert lower-frequency data to a higher frequency (the opposite direction of what you need here).
  • Aggregation - this involves grouping the daily observations into quarters and then applying an aggregation function such as the sum, average, or count; it is the usual route when going from daily to quarterly data.
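In pandas, the aggregation route is essentially a one-liner; a minimal sketch (assuming a DataFrame df indexed by a daily DatetimeIndex, with an illustrative column name) is:
import pandas as pd

# df: daily data with a DatetimeIndex and a column 'value' (hypothetical)
quarterly_mean = df['value'].resample('Q').mean()   # quarterly average
quarterly_sum  = df['value'].resample('Q').sum()    # quarterly total (e.g., for flow variables)
quarterly_last = df['value'].resample('Q').last()   # end-of-quarter value (e.g., for stocks or prices)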
  • asked a question related to Time Series
Question
2 answers
What is a short, new way for you to solve this problem of data analysis in time series? Suppose you have time series data. What steps do you take, and how do you analyse these data? How do you handle it in your own work? Follow me and share your posts and personal experience.
Relevant answer
Answer
In time series data analysis, a concise approach involves several key steps:
1. Data Exploration:
- Understand the characteristics of the time series.
- Check for trends, seasonality, and outliers.
2. Data Preprocessing:
- Handle missing values and outliers appropriately.
- Consider normalization or scaling if needed.
3. Visualization:
- Plot the time series to gain insights.
- Use tools like line plots, histograms, or box plots.
4. Feature Engineering:
- Extract relevant features, such as rolling statistics or lag features.
- Consider transformations for stationarity.
5. Model Selection:
- Choose an appropriate model based on characteristics (ARIMA, SARIMA, LSTM, etc.).
- Split data into training and testing sets.
6. Training:
- Train the selected model on the training set.
7. Evaluation:
- Evaluate model performance on the test set.
- Use metrics like Mean Absolute Error (MAE) or Root Mean Squared Error (RMSE).
8. Optimization:
- Fine-tune model parameters if needed.
- Consider ensemble methods or hybrid models.
9. Prediction:
- Use the trained model to make predictions on new data.
10. Validation:
- Validate results against actual outcomes.
- Adjust the model or methodology if necessary.
In practice, each of these steps is implemented using statistical and machine learning techniques, and it often requires adapting strategies based on the specific characteristics of the data. It's crucial to stay mindful of the context and the goals of the analysis throughout the process.
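To make steps 5-7 concrete, here is a compact Python sketch (assuming pandas/statsmodels/NumPy, a Series y with a regular time index, and an ARIMA order chosen purely for illustration; it is a template rather than a recommended model):
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# y: pandas Series of the time series (hypothetical)
split = int(len(y) * 0.8)
train, test = y.iloc[:split], y.iloc[split:]      # chronological train/test split

res = ARIMA(train, order=(1, 1, 1)).fit()         # order here is only illustrative
forecast = res.forecast(steps=len(test))

mae  = np.mean(np.abs(test.values - np.asarray(forecast)))
rmse = np.sqrt(np.mean((test.values - np.asarray(forecast)) ** 2))
print(f"MAE = {mae:.3f}, RMSE = {rmse:.3f}")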
  • asked a question related to Time Series
Question
1 answer
Hello Everyone !
I am currently analyzing the emissions of around 300 companies over a time span of roughly 20 years (time series data!). I am wondering what the best way is to approach the analysis of this dataset and what methods I can use to draw insights from it.
I was thinking about starting by indexing my dataset (since companies don't have the same volume of emissions) and then averaging these indices according to specific characteristics of the companies (e.g., size, country, etc.) in order to try to pick up trends.
After the descriptive statistical analysis, I was thinking that I could top off my analysis with a regression of emissions on the type of company (investment company, state-owned, etc.). For that matter, is there a specific statistical test I can use to regress time series data on a specific independent variable?
Let me know what you think of this approach... I am listening to your comments!
Cordially,
Diego Spaey
Relevant answer
Answer
Hi,
Your approach is good. Use ARIMA for trend analysis and panel data regression for regression analysis, ensuring to check for stationarity and autocorrelation in your time series data.
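Since 300 companies observed over roughly 20 years is really panel data, here is a minimal sketch of the regression step (assuming statsmodels and a long-format DataFrame with illustrative column names; standard errors are clustered by company to respect the serial dependence within each firm):
import statsmodels.formula.api as smf

# df (long format, hypothetical columns): 'emissions', 'company', 'company_type', 'year'
model = smf.ols('emissions ~ C(company_type) + C(year)', data=df)
res = model.fit(cov_type='cluster', cov_kwds={'groups': df['company']})
print(res.summary())
# C(year) absorbs common time effects; clustered SEs account for autocorrelation within each company.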
Hope this helps.
  • asked a question related to Time Series
Question
3 answers
I am interested in studying private investment and its determinants using time series data. I want to understand whether it is possible to use a time series of fewer than 30 years, and how to go about it. Thank you for your information.
Relevant answer
Answer
Whether your T is large enough for your analysis depends upon several factors, especially on the model specified and the estimator employed. However, values of T around 30 can be considered sufficiently large in many empirical applications.
  • asked a question related to Time Series
Question
4 answers
Hello,
We are trying to analyze data from different patients. Here's the summary for the entire cohort:
1) Each patient will be followed over time.
2) Each patient will be given a starting dose of a compound, and a certain lab readout will be generated for that time point at that starting dose.
3) As time progresses, the dose will be increased/maintained, and a different readout will be generated by then. The dose adjustment will be done on a clinical basis.
Our question is:
We wanted to know whether increasing the dose would significantly change the readout at that certain time point. Two-way ANOVA (or a repeated two-way ANOVA) will erroneously provide us with a result. Which tool/statistical test would be appropriate for this scenario?
Thanks!
Relevant answer
Answer
A statistical tool that can be used for a dose-response curve with a time component is nonlinear regression. You can use nonlinear regression to quantify the fold shift in the dose and its confidence interval, and to compute a P value that tests the null hypothesis that there was no shift.
That method is only applicable when the number of means is greater than 5; otherwise you must use multiple-comparison methods such as Duncan, Tukey, or Newman-Keuls. However, the statistical design issues related to dose-ranging studies are not always keenly understood, and a poorly designed study can be costly for later development.
  • asked a question related to Time Series
Question
1 answer
For example, there is no doubt that global sea level is rising, and based on global mean sea level (GMSL) data we can calculate the trend of the GMSL. However, we all know that there must be some interannual/decadal variations in the GMSL, and even aliasing errors in our data. We can obtain the linear trend of the GMSL time series by the least-squares method, but how can we estimate the uncertainty range of this trend, given that (1) the GMSL time series is autocorrelated, and (2) the variations of the GMSL time series are not white noise and the standard deviation of the GMSL anomalies is not 1?
Relevant answer
Answer
I suggest that you employ an ARCH/ARMAX method where the ARCH component models the conditional variance of the error term, the ARMA component models the autoregressive nature of your data, and the X component models the effects of the exogenous variables.
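A simpler and widely used alternative to a full ARCH/ARMAX fit is to estimate the linear trend by OLS and attach autocorrelation-robust (Newey-West/HAC) standard errors to it; a minimal Python sketch (assuming statsmodels/NumPy, an array gmsl of sea-level anomalies and an evenly spaced time vector t; the lag truncation of 12 is an illustrative choice) is:
import numpy as np
import statsmodels.api as sm

# t: time in years, gmsl: sea-level anomalies, e.g. in mm (hypothetical arrays of equal length)
X = sm.add_constant(t)
ols = sm.OLS(gmsl, X).fit(cov_type='HAC', cov_kwds={'maxlags': 12})  # Newey-West standard errors
trend, se = ols.params[1], ols.bse[1]
print(f"trend = {trend:.3f} +/- {1.96 * se:.3f} per year (95% CI, autocorrelation-robust)")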
Here is a link to a recent application of the method:
  • asked a question related to Time Series
Question
4 answers
In time series analysis, under what circumstances would the multiplicative model be preferred to the additive model?
Relevant answer
Answer
In time series analysis, whether to use a multiplicative model or an additive model depends on the characteristics of the underlying data. Here are some considerations that might make the multiplicative model more appropriate:
  1. Proportional Seasonal Variation: If the seasonal patterns in the data exhibit proportional changes relative to the level of the series, then a multiplicative model is more suitable. In other words, if the amplitude of the seasonal variation increases or decreases as the level of the series changes, a multiplicative model is preferred.
  2. Varying Volatility: When the variability of the data increases or decreases with the level of the series, a multiplicative model may be more appropriate. For example, if the percentage fluctuations in the data increase with the level of the series, a multiplicative model is often more accurate.
  3. Non-Constant Growth: If the trend in the time series exhibits a non-constant growth rate, the multiplicative model is more suitable. In such cases, the growth rate is proportional to the level of the series.
  4. Seasonal Patterns Expressed as Percentage of the Mean: If the seasonal patterns are more appropriately expressed as a percentage of the mean, a multiplicative model may be preferred. This is common when the amplitude of seasonal fluctuations varies with the overall level of the series.
In contrast, an additive model is more appropriate when the seasonal variations and other components are consistent in magnitude regardless of the level of the series. The choice between a multiplicative and additive model is not always clear-cut, and sometimes experimentation with both models is necessary to determine which one provides a better fit to the data. Additionally, the nature of the data and the specific characteristics of the time series should guide the choice of the appropriate model.
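A quick way to compare the two forms in practice is to run a classical decomposition under each assumption and inspect the residuals; a minimal sketch (assuming statsmodels and a monthly pandas Series y with a DatetimeIndex) is:
from statsmodels.tsa.seasonal import seasonal_decompose

# y: monthly pandas Series with a DatetimeIndex (hypothetical); the multiplicative form needs strictly positive data
add  = seasonal_decompose(y, model='additive', period=12)
mult = seasonal_decompose(y, model='multiplicative', period=12)

# If the multiplicative residuals look more homoscedastic (their spread no longer grows with the
# level of the series), that is a sign the multiplicative form is the better fit.
add.resid.plot(title='Additive residuals')
mult.resid.plot(title='Multiplicative residuals')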
  • asked a question related to Time Series
Question
1 answer
I encountered a problem converting Google Earth Engine data to ASCII format, in order to extract NDVI time series in TimeSat.
Relevant answer
Answer
To export time series data from Google Earth Engine to ASCII format for use in TimeSat, you can follow these general steps:
  1. Get NDVI Time Series from Google Earth Engine: Use Google Earth Engine to fetch the NDVI time series data for your region of interest. You can use ee.ImageCollection and ee.Reducer to calculate the mean NDVI for each image in the collection. Here's a simplified example in JavaScript:
var collection = ee.ImageCollection("MODIS/006/MOD13Q1").select("NDVI");
var meanNDVI = collection.reduce(ee.Reducer.mean());
  2. Export Data to Google Drive: Use the Export.image.toDrive function to export the data to Google Drive. Specify the region, scale, and other parameters as needed.
Export.image.toDrive({
  image: meanNDVI,
  description: 'NDVI_TimeSeries',
  scale: 250,
  region: yourRegionOfInterest,
  maxPixels: 1e13
});
  3. Download from Google Drive: Once the export is complete, you can download the exported file from Google Drive. The file will be in GeoTIFF format.
  4. Convert GeoTIFF to ASCII: Use GIS software or a tool like GDAL to convert the GeoTIFF file to ASCII format. You can use the gdal_translate command in a terminal:
gdal_translate -of XYZ input.tif output.xyz
This command converts the GeoTIFF file to XYZ ASCII format.
Now, you have an ASCII file that can be used as input to TimeSat. Adjust the steps based on your specific requirements and the dataset you are working with.
  • asked a question related to Time Series
Question
2 answers
I have several IONEX files for multiple GNSS stations, and I would like to calculate the vertical and slant TEC values and store them in a time series data frame. Are there any software packages or reference material available that can help me read those IONEX files please?
Relevant answer
Answer
I have used MATLAB code to extract it from IONEX files, added coordinate and time columns, and saved it to an Excel file. If this is what you want to do, I can provide you with that code.
  • asked a question related to Time Series
Question
2 answers
I am dealing with a company profile that was established in 2005. For most of the variables, the information is available either till 2021 or 2018.
My concern is whether a time series ARDL application for 2005-2018 or 2005-2021 will be enough to report acceptable results. Since the company was established recently, there is no way to extend the time period either. If possible, is there any study or reference that has applied such short time-period data in a time series analysis?
Thank you in advance for all your suggestions and opinions.
Relevant answer
Answer
Hi Nivaj,
please take a look at the following link:
I see that the author uses time series analysis
Best,
Apostolos
  • asked a question related to Time Series
Question
4 answers
Is the Toda-Yamamoto test used for the long or the short term, and what is the difference between long term and short term? Is short term 5 years, or more, or less? Can we apply Toda-Yamamoto to 10 years of data or not? If not, is there any alternative test?
Thank you
Relevant answer
Answer
Hi Fatima,
I think that it will be very helpful (especially if you use the Eviews software). It explains the procedure step by step.
The test does not distinguish between long and short-run effects.
  • asked a question related to Time Series
Question
6 answers
Dear Scholars,
In financial time series modelling, the usual practice is to model the financial variable using log-returns. Why can't we model the financial time series using the price itself? I am looking forward to reading your professional responses soon.
#TIA
Relevant answer
Answer
Saheed Busayo Akanni Probably because the range is too large to be shown easily and clearly on a linear axis. Logarithms will compress the data. Consider trying to show $1, $100, $1 million, $1 billion dollars on a linear axis. Consider the same situation with logarithms.
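For reference, the transformation under discussion is simply the first difference of log prices; a short sketch (assuming NumPy/pandas and a price Series with a hypothetical name) is:
import numpy as np

# prices: pandas Series of prices (hypothetical)
log_returns = np.log(prices).diff().dropna()   # r_t = ln(P_t) - ln(P_{t-1})
# Prices are usually non-stationary (trending); log-returns are typically much closer to stationary,
# which is one practical reason they are preferred for modelling.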
  • asked a question related to Time Series
Question
1 answer
Time series
Relevant answer
Answer
What is your question?
  • asked a question related to Time Series
Question
7 answers
Good morning,
For my research project, I am using school meal selection data. I would like to investigate children's food selection patterns across multiple time series using the K-means method. Given the remit of the study, my data contain missing values because no data are collected during school and bank holidays and at weekends, which creates breaks in the food selection values. When you investigate a phenomenon on a daily scale, how do you manage these kinds of missing values? Do you change the temporal scale (for example, month rather than day), keep the breaks in the graphics, or perform an imputation?
Relevant answer
Your efforts are appreciated
  • asked a question related to Time Series
Question
9 answers
In my research study, I have four annual time series and I want to convert them to monthly series while retaining the wide fluctuations in the original series, knowing that the curves of these annual series are approximately exponential.
Relevant answer
Answer
No, Viktor Zadachyn, it is possible provided you have a related monthly time series with the proper periodicity. Of course it will differ from the true (unknown) monthly time series.
  • asked a question related to Time Series
Question
4 answers
My research study is about financial price modelling. I have several time series in which the frequency of some series is monthly, while the frequency of others is annual. My objective is to convert the annual series to monthly data, but what is the relevant statistical method in this case?
Relevant answer
Answer
@Davood-Omidian yes, I understand your answer; it is very important and I will try it. Thank you very much.
Best regards
  • asked a question related to Time Series
Question
3 answers
How can I obtain free precipitation, temperature, and potential evaporation data for a specific lake for the past 20 years?
Relevant answer
Answer
Dear Saba Moshtaghi,
The availability of historical climate data can fluctuate based on the specific location and the particular parameters of interest. To ensure accurate data retrieval, it is crucial to possess the precise name and geographic coordinates, including latitude and longitude, of the lake in question.
Numerous government agencies, such as the National Weather Service, diligently gather and manage climatic data for various locations, including lakes. Additionally, you can explore organizations and websites that offer access to climate data, such as NASA's Earthdata Search and the World Meteorological Organization.
It is worth noting that certain datasets may necessitate permissions or subscriptions for access, requiring initial registration. Furthermore, depending on your research objectives, it may be advisable to consider data from at least three nearby weather stations and compute an average. This becomes especially critical when dealing with rainfall data, as precipitation patterns can vary significantly both temporally and spatially.
humble regards,
  • asked a question related to Time Series
Question
1 answer
I have 5 time series data points, of which 3 are absolute values and 2 are percentage terms. After converting the absolute values into log form, can I use ARDL?
Relevant answer
Answer
Yes, you can use ARDL for time series data analysis even if some of the variables are in absolute values and some are in percentage terms. ARDL (Auto Regressive Distributed Lag) is a method that can be used for modeling the relationship between two or more time series variables. It is particularly useful when the variables are non-stationary, meaning that they have a trend or a seasonality component that needs to be accounted for. ARDL can handle both stationary and non-stationary variables, and it can also handle mixed orders of integration, such as I(0) and I(1)
Therefore, you can use ARDL to model your time series data that contains both absolute values and percentage terms. @ritu
  • asked a question related to Time Series
Question
4 answers
kindly provide the answer to my question sir/madam.
Relevant answer
Answer
There could be many options for detrending or fitting a trend. In Python, one choice is to fit a nonlinear trend and take the remainder as the detrended signal. One such tool is the package at https://pypi.org/project/Rbeast/, where in the second example the trend is removed, leaving only the seasonal variation. But depending on your use scenario, a simple linear regression may be sufficient for detrending.
  • asked a question related to Time Series
Question
3 answers
Kindly provide references.
Relevant answer
Answer
Many options are possible. If you are using R, let me borrow from the time series CRAN task view:
Change point detection is provided in strucchange and strucchangeRcpp (using linear regression models) and in trend (using nonparametric tests). The changepoint package provides many popular changepoint methods, and ecp does nonparametric changepoint detection for univariate and multivariate series. changepoint.np implements the nonparametric PELT algorithm, changepoint.mv detects changepoints in multivariate time series, while changepoint.geo implements the high-dimensional changepoint detection method GeomCP. Factor-augmented VAR (FAVAR) models are estimated by a Bayesian method with FAVAR. InspectChangepoint uses sparse projection to estimate changepoints in high-dimensional time series. Rbeast provides Bayesian change-point detection and time series decomposition.
Of these, the Rbeast package is maintained by me and also available in Matlab and Python. If interested, see more infor at https://github.com/zhaokg/Rbeast.
  • asked a question related to Time Series
Question
3 answers
Could you please provide me with 2 or 3 Elsevier or Springer articles that utilize this formula:
LST = BT / (1 + w * (BT / p) * ln(e))?
Relevant answer
Answer
Check Google
  • asked a question related to Time Series
Question
5 answers
What kinds of mining techniques are used in spatial databases, and what is the difference between temporal data and a time series?
Relevant answer
Answer
Dr Biswajit Sarangi thank you for your contribution to the discussion
  • asked a question related to Time Series
Question
2 answers
Discussion of issues related to the use of Neural Network Entropy (NNetEn) for entropy-based signal and chaotic time series classification. Discussion about the Python package for NNetEn calculation.
Main Links:
Python package
Relevant answer
Answer
Dear Dr. Mohammad Imam, your comment does not carry much substance; it reads like a generic instruction for creating a Python package.
  • asked a question related to Time Series
Question
6 answers
In my research study, I'm trying to model a financial series using two series as the data base: the first series is monthly and the second is daily.
I've tried to render the second series monthly as well, by taking the average within each month, but I've noticed that the wide fluctuations that make my research so interesting have disappeared.
How can we convert a daily time series into a monthly series while retaining the effect of the extreme values in the series?
What is the appropriate measure or statistical technique in this case?
Relevant answer
Answer
There are several ways to convert a daily series into a monthly one. Popular methods include
  1. Taking the average (arithmetic or geometric).
  2. Taking the value at the end of the month
  3. Taking the value at the middle of the month.
  4. Taking the value at the beginning of the month.
  5. Taking the maximum of the series in a month,
  6. Take the minimum of the series in a month, or
  7. the difference between that maximum and minimum.
  8. Get an estimate of the monthly variance of the series
What you do now depends on the nature of your monthly variable and the process you are trying to capture. Look at the economics or finance of the process. For example, if you think that your monthly variable is dependent on the volatility and level of your daily series, you might consider using one of 1 to 4 for the level and 7 or 8 for the volatility. There are many combinations that might be useful but that is up to you. It may also be necessary to log transform your variables before analysis.
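To illustrate most of the options above in pandas (assuming a daily Series named daily with a DatetimeIndex; option 3, the mid-month value, would need a small custom rule):
import pandas as pd

# daily: pandas Series with a DatetimeIndex (hypothetical)
monthly = pd.DataFrame({
    'mean':  daily.resample('M').mean(),    # option 1 (arithmetic average)
    'last':  daily.resample('M').last(),    # option 2 (end of month)
    'first': daily.resample('M').first(),   # option 4 (beginning of month)
    'max':   daily.resample('M').max(),     # option 5
    'min':   daily.resample('M').min(),     # option 6
    'std':   daily.resample('M').std(),     # option 8 (monthly volatility)
})
monthly['range'] = monthly['max'] - monthly['min']   # option 7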
  • asked a question related to Time Series
Question
3 answers
My dependent variable consists of cross-sectional data with 8 observations, while the independent variables consist of time series data with 45 observations.
Relevant answer
Answer
It depends on the nature of your study and the data you have. Can you please provide more details about your study?
  • asked a question related to Time Series
Question
3 answers
I am looking for time series data collected from sensors (presence, temperature, contact, RFID, etc.) in a Smart building.
I would like to obtain data allowing me to better understand the occupants of a building and to adapt to their rhythm of life and their needs.
Relevant answer
Answer
Thanks Hamid,
this dataset fits. However, I would like to find a dataset for the case of a house that contains several rooms (bedrooms or other places in the house such as the bathroom, kitchen, etc.).
Yours
  • asked a question related to Time Series
Question
6 answers
I'm expecting to use stock prices from the pre-COVID period up to now to build a model for stock price prediction. I have doubts regarding the periods I should include in my training and test sets. Should I use the pre-COVID period as my training set and the current COVID period as my test set? Or should I include the pre-COVID period plus part of the current period in my training set and use the rest of the current period as my test set?
Relevant answer
Answer
To split your data into training and test sets for predicting stock prices using pre-COVID and current COVID periods, consider using a time-based approach. Allocate a portion of data from pre-COVID for training and the subsequent COVID period for testing, ensuring temporal continuity while evaluating predictive performance.
  • asked a question related to Time Series
Question
3 answers
I want to extract patent indices in the field of solar energy and analyze them with time series methods.
Does anyone know which patent indicators can be analyzed in the context of time series?
Thank you
Relevant answer
Answer
Some patent indicators that can be analyzed in the context of time series are:
- Patent applications and grants
- Patent families
- Patent citations
- Patent classifications
  • asked a question related to Time Series
Question
3 answers
My article focuses on the changes in land surface temperature, vegetation, and waterbodies over a long period in an area, using Landsat and MODIS data with a new methodology.
Relevant answer
Answer
#Remote Sensing of Environment (Recommended) (link: https://www.sciencedirect.com/journal/remote-sensing-of-environment)
#Journal of Remote Sensing (in partnership with science) (link: https://spj.science.org/journal/remotesensing)
Access the link and find out if there is any other: https://www.gisvacancy.com/remote-sensing-journals/
  • asked a question related to Time Series
Question
6 answers
How do we deal with missing values in time series data?
Relevant answer
Answer
The problem should be treated in a deeper manner:
(1) How large is the sample size, and what is the share of missing values?
(2) What is the type of your time series? In which field is your approach?
(3) What is the time measure? Intra-day, daily, monthly, sub-annual, etc.?
(4) Where are the missing values situated? At the beginning or at the end of the time series?
After you answer these questions you may search for an appropriate method of imputation, as Chuck and Mirajul already mentioned.
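Once those questions are answered, the mechanics are simple; a small pandas sketch of common imputation choices (assuming a Series s with a DatetimeIndex and NaN at the gaps; names are illustrative) is:
import pandas as pd

# s: pandas Series with a DatetimeIndex and NaN where values are missing (hypothetical)
s_ffill  = s.ffill()                        # carry the last observation forward
s_linear = s.interpolate(method='linear')   # straight-line interpolation between known points
s_time   = s.interpolate(method='time')     # interpolation weighted by the actual time gaps
s_season = s.fillna(s.groupby(s.index.month).transform('mean'))  # fill with the month-of-year mean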
  • asked a question related to Time Series
Question
4 answers
I am doing trend analysis. When I was running homogeneity tests (Pettitt, SNHT, Buishand, von Neumann) on precipitation and temperature time series using XLSTAT, I found that a great number of my temperature series are inhomogeneous. Can anybody tell me how I can make these data homogeneous?
Relevant answer
Answer
You could rescale the series using techniques such as min-max normalization or z-score standardization (feature scaling).
  • asked a question related to Time Series
Question
1 answer
The available memory keeps changing even if no file is open. The files that I want to open are large files of around 7-10 GB
Relevant answer
Answer
One possible solution is to increase the amount of memory allocated to ImageJ. You can do this by going to Edit > Options > Memory & Threads and increasing the maximum memory allocation. Another solution is to try closing other programs that are running on your computer to free up memory.
  • asked a question related to Time Series
Question
7 answers
Hello,
I have two time-series, A and B; what test can I apply to check if they differ?
I could compare the values between A and B at specific time points; for instance, is there a difference between the measures for A and B at t=24 hours?
But, is there a better way to compare time points?
Thank you
Relevant answer
Answer
I am afraid that the suggestions given by Chuck A Arize do not answer the question. You would like a kind of t-test, but the problem is that the two time series are not random samples. If the series are stationary it can probably be done by taking the autocorrelation into account, but I don't know how. If the time series are not stationary, there is little hope. Good luck.
  • asked a question related to Time Series
Question
3 answers
Is there a way to estimate the length of each period or seasonality in a time series in real-time? This is useful for adaptive and online time-series processing (streaming) where the entire data is not available a-priori.
Each observation may arrive sequentially or in a batch of observations. We want to be able to analyse observations as they come without having to store the entire historical data; though we may be able to store the most recent observations. 
Relevant answer
Answer
One way to estimate the length of each period or seasonality is to find the frequency corresponding to the maximum of the spectral density. However, the spectrum at low frequencies will be affected by trend, so you need to detrend the series first.
Another way is to use functions such as seasonal_decompose and STL (Python statsmodels package) or models like SARIMA that have a period or cycle parameter that indicates ‘the period of the series’ used (period, seasonal, etc)
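A minimal sketch of the spectral route (assuming SciPy/NumPy and a 1-D array x of recent observations; this is the batch version — in a streaming setting you would recompute it on a sliding window) is:
import numpy as np
from scipy.signal import detrend, periodogram

# x: 1-D array of the most recent observations (hypothetical)
x_dt = detrend(x)                           # remove the linear trend first
freqs, power = periodogram(x_dt, fs=1.0)    # fs=1 -> frequencies in cycles per sample
dominant = freqs[np.argmax(power[1:]) + 1]  # skip the zero-frequency bin
print(f"Estimated period: {1.0 / dominant:.1f} samples")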
  • asked a question related to Time Series
Question
4 answers
My dependent variable consists of cross-sectional data with 8 observations, while the independent variables consist of time series data with 45 observations.
Relevant answer
Answer
A time series regression model will have the form
y_t = \beta_1 + \beta_2 x_{2t} + \beta_3 x_{3t} + \dots + \beta_k x_{kt} + u_t
where y_t is the time series of the dependent variable and
x_{2t}, x_{3t}, \dots, x_{kt} are explanatory time series variables. (Certain conditions must be satisfied if the coefficients are to be estimated. There are additional problems to be addressed if any of the series are non-stationary.)
While I have seen studies with 30 observations of each variable that have provided useful results generally considerably more observations are usually required.
Look at your data. As far as I can tell, you do not have the time series observations of the dependent variable that a time series regression requires. Perhaps I still do not understand.
  • asked a question related to Time Series
Question
3 answers
My Dear,
I have a series y (40 values from sales) and need to use neural networks in MATLAB/Simulink to forecast the future values of y at times 41, 42, 43, ..., 50.
Relevant answer
Answer
An Artificial Neural Network (ANN) is an information processing paradigm that is inspired by the way biological nervous systems, such as the brain, process information. The key element of this paradigm is the novel structure of the information processing system. It is composed of a large number of highly interconnected processing elements (neurones) working in unison to solve specific problems. ANNs, like people, learn by example. An ANN is configured for a specific application, such as pattern recognition or data classification, through a learning process.
Regards,
Shafagat
  • asked a question related to Time Series
Question
5 answers
When we do GARCH, we make time series stationary.
Stationary means constant mean and variance.
Then how are we modelling variance with GARCH when it is stationary?
What am I missing here?
Also, can I find the daily standard deviation of a stationary time series using the rolling window method? By the same logic, why am I getting different daily S.D. values for a stationary time series using the rolling window method?
Relevant answer
Answer
If a time series is stationary in the weak sense, its unconditional mean and variance are constant, but its conditional variance can still change over time, and that conditional (time-varying) volatility is exactly what GARCH models. So there is no contradiction: we first make the series (e.g., returns) stationary and then let GARCH describe the volatility clustering in it. The same distinction explains the rolling-window result: a rolling standard deviation estimates the local, conditional variability, so it differs across windows even though the unconditional variance is constant.
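A minimal Python sketch of that idea (assuming the arch package and a Series of percentage returns; GARCH(1,1) and the 20-day window are only illustrative choices), with a rolling standard deviation alongside for comparison:
import pandas as pd
from arch import arch_model

# returns: pandas Series of (percentage) returns, roughly stationary (hypothetical)
am = arch_model(returns, mean='Constant', vol='GARCH', p=1, q=1)
res = am.fit(disp='off')
print(res.summary())

cond_vol = res.conditional_volatility          # time-varying conditional volatility from GARCH
rolling_sd = returns.rolling(window=20).std()  # local variability; also changes over time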
  • asked a question related to Time Series
Question
1 answer
Hello, I have several parameters (chlorophyll, primary production) that I measured at two different marine stations over 2 years (it's a time series). I need a statistical test to look at whether there are differences between these two locations, despite the fact that the planktonic communities are also changing over time. My data are not normally distributed, so I need a non-parametric test. Thanks for your help!
Relevant answer
Answer
You can use non-parametric tests to compare two marine stations over 2 years. Non-parametric tests are valid for both non-Normally distributed data and Normally distributed data. Some examples of non-parametric tests include the Mann-Whitney U test and the Wilcoxon signed-rank test. The Mann-Whitney U test is used to compare two independent groups. The Wilcoxon signed-rank test is used to compare two dependent groups.
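For what it's worth, here is a two-line SciPy sketch of those tests (assuming arrays of the chlorophyll values at the two stations; note that both tests ignore the temporal autocorrelation in the series, so the p-values should be interpreted cautiously):
from scipy.stats import mannwhitneyu, wilcoxon

# station_a, station_b: arrays of measurements at the two stations (hypothetical)
u_stat, p_unpaired = mannwhitneyu(station_a, station_b, alternative='two-sided')

# If the two stations were sampled on the same dates, the paired Wilcoxon test is more appropriate:
w_stat, p_paired = wilcoxon(station_a, station_b)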
  • asked a question related to Time Series
Question
6 answers
Hello,
Is this true in all cases: "Before applying the Augmented Dickey-Fuller (ADF) test, it is generally recommended to remove or account for the trend and seasonality components of the time series."
Thanks
F CHELLAI
Relevant answer
Answer
This is incorrect. The ADF test (or its variants) makes no such assumption. In fact, the ADF test is used to do a preliminary assessment of the stochastic property of a given time series, namely, trend-stationary vs. differenced-stationary vs. both.
Prof. Ahking
  • asked a question related to Time Series
Question
13 answers
It is usual practice to calculate the CV for rainfall/precipitation data after detrending the time series, as suggested by many authors (e.g., Giorgi et al. 2004; Blazquez et al. 2013). Say I have total winter rainfall data in a single time series. I calculated the detrended time series by subtracting the linear trend (the fitted values of the linear regression) from the actual data. I got both positive and negative values in the detrended series (the residuals). If I calculate the CV (SD/mean) of this series, the values blow up, because the mean of the detrended series is nearly zero.
Please kindly guide me. I want to know, where am I doing wrong?
Relevant answer
Answer
That method for detrending precip. (avoiding negative precip., conserving dry days as dry days, and conserving the mean) has been published, as part of the TRANSLATE project in: https://www.frontiersin.org/articles/10.3389/fclim.2023.1166828/full . Please cite that source if you use this method. It's a good, simple method!
  • asked a question related to Time Series
Question
3 answers
I am using a Durbin-Watson test as a method through which to test for autocorrelation in a time series. Said data forms the basis of an interrupted time series analysis. My question is whether the absence of detected autocorrelation in the DW test on a simple model (OLS) is sufficient to inform modelling thereafter (ie. should I undertake the DW test on other plausible model types or does the DW test on an OLS suffice)?
Relevant answer
Answer
Hi,
Just to mention (referring to Wikipedia), the Durbin–Watson statistic, while displayed by many regression analysis programs, is not applicable in certain situations. For instance, when lagged dependent variables are included among the explanatory variables, it is inappropriate to use this test. Durbin's h-test or likelihood-ratio tests, which are valid in large samples, should be used instead.
Hamid
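A minimal sketch of both checks in R with the lmtest package, where fit is a hypothetical fitted lm object from the OLS model:
library(lmtest)
dwtest(fit)             # Durbin-Watson test for first-order autocorrelation in the residuals
bgtest(fit, order = 4)  # Breusch-Godfrey test, a common alternative that remains valid with lagged dependent variables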
  • asked a question related to Time Series
Question
3 answers
Hello, I am doing my MSc Thesis and using Mann-Kendall trend analysis for discharge flows.
To implement the Mann-Kendall test, is it obligatory to use uncorrelated time series? I checked for autocorrelation in my time series (1 lag autocorrelation and significance level 5%) and found that my time series is autocorrelated.
After implementing the original Mann-Kendall and Mann-Kendall Test with Trend-Free Prewhitening, I realised that the autocorrelation of the original Mann-Kendall had overestimated the trends.
However, should I include the results of both the original Mann-Kendall test and comment on the difference with the Mann-Kendall test with trend-free prewhitening? Or is it wrong to use the results of the original Mann-Kendall test because the time series is autocorrelated?
Thank you in advance for any help!
Relevant answer
Answer
Akrivi Alexandraki, even annual data might be autocorrelated. For example, if your data have a clear trend over time, they may show autocorrelation. Autocorrelation is the relation between lagged values, so seasonality is not a prerequisite for autocorrelation.
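A minimal sketch in R showing the original test next to a trend-free prewhitened estimate (q is a hypothetical vector of annual discharge; package and function names are the usual ones but should be checked against the documentation):
library(trend)
mk.test(q)                                # original Mann-Kendall test
library(zyp)
zyp.trend.vector(q, method = "yuepilon")  # Sen slope and significance with Yue-Pilon trend-free prewhitening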
  • asked a question related to Time Series
Question
8 answers
What is the best method for smoothing data if there are both negative and non-negative time series?
Relevant answer
Answer
One way to deal with a series containing both negative and positive values in a single econometric model is to work with the absolute value of the series; another is a logarithmic transformation. A plain log is only defined for positive values, so in that case it is usually applied to the absolute value of the data plus one, keeping track of the sign.
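As an alternative to the absolute-value-plus-one device, the inverse hyperbolic sine (or a signed log) keeps the sign of the data and behaves like a log for large magnitudes; a minimal base-R sketch with a simple moving-average smoother (x is a hypothetical series and the window width is arbitrary):
x_ihs <- asinh(x)                                   # defined for negative, zero and positive values
x_slog <- sign(x) * log1p(abs(x))                   # signed log, a similar simple option
sm <- stats::filter(x_ihs, rep(1/5, 5), sides = 2)  # centred 5-point moving average of the transformed series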
  • asked a question related to Time Series
Question
3 answers
STL Decomposition
Relevant answer
Answer
Seasonal-Trend decomposition using LOESS (STL) is a time series decomposition technique that separates a time series into three components: trend, seasonal, and remainder. The trend component captures the long-term changes in the data, the seasonal component captures the repetitive patterns that occur within a year, and the remainder component represents the random variation or noise in the data.
The STL method uses a non-parametric technique called LOESS (Locally Estimated Scatterplot Smoothing) to estimate the trend and seasonal components of the time series. LOESS uses a moving window to estimate the local polynomial that fits the data, with the window size varying depending on the density of the data points. This allows LOESS to capture complex nonlinear relationships in the data, including seasonal patterns.
Therefore, STL does remove both trend and seasonality from a time series by separating out those two components, leaving only the remainder component, which represents the random variation or noise in the data. The remainder is often easier to model and analyze than the original series since the trend and seasonal components have been removed. However, it is important to note that the accuracy of the decomposition can be influenced by the choice of parameters, such as the window size used in the LOESS smoothing, and may require some tuning based on the specific characteristics of the data.
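A minimal sketch in R, assuming a hypothetical monthly series y:
y_ts <- ts(y, frequency = 12)            # stl() needs a ts object with a seasonal period
fit <- stl(y_ts, s.window = "periodic")  # or a numeric s.window to let the seasonal pattern evolve
plot(fit)                                # trend, seasonal and remainder panels
rem <- fit$time.series[, "remainder"]    # the detrended, deseasonalised component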
  • asked a question related to Time Series
Question
6 answers
I want to apply an ARDL model to a data set of 5 yearly time series (one response and 4 explanatory), spanning from 1971 to 2014. After some research, I found that an ARDL model requires each series to be either I(0) or I(1), therefore a unit root test is necessary. However, when I run the Augmented Dickey-Fuller test in R, I obtain some conflicting results. For example, I have the following series (EI.png). When I run the following test, I obtain results suggesting the presence of a unit root both at the levels and at the differences (TEST1.png):
But when I run the same test on the difference series, the results suggest the absence of a unit root (TEST2.png).
Again, when I put type = "trend", the test shows there is no unit root (TEST3.png).
Can someone please help me understand which of these tests I should report when writing my paper? Can I report a series as stationary if it is trend-stationary? Thank you in advance.
PS: These are all the remaining four series (All.png):
Relevant answer
Answer
Thank you very much for the advice Mr Samuel Oluwaseun Adeyemo.
  • asked a question related to Time Series
Question
5 answers
Hi all,
In time series analysis, when we check the normality assumption, should we first make the time series stationary or not? And if it is not necessary, could this violate the i.i.d. assumption of the observations?
Regards
Relevant answer
Hello, when do we use the Granger causality test and when do we use the cointegration test?
  • asked a question related to Time Series
Question
5 answers
Salam,
I'm asking: if my data are not time series data, is stationarity a relevant concern (or not) for fitting a multiple linear regression model? And if so, what makes the difference compared with time series data?
Thank you
Relevant answer
Answer
It is not the fact that the data are time series which is important; what matters is the presence of autocorrelation. For example, if your data describe the composition of the soil along a road, say every 10 meters, you will also face autocorrelation even though the data are not a time series. It is known how to handle autocorrelation in regression, but it adds difficulties and typically requires longer samples.
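A minimal sketch in R of one standard remedy, generalised least squares with an AR(1) error structure from the nlme package (dat is a hypothetical data frame and obs an integer giving the observation order):
library(nlme)
fit <- gls(y ~ x1 + x2, data = dat,
           correlation = corAR1(form = ~ obs))  # AR(1) correlation between successive observations
summary(fit)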
  • asked a question related to Time Series
Question
2 answers
The dependent variable I chose for the study is stationary at level I(0), and other variables are a mixture of I(0) and I(1) time series data.
Can I use the ARDL model for this study? Which model is appropriate?
In the study by G. De Vita et al. it says "….. it is necessary to ensure that the dependent variable is I(1) in levels .......". https://doi.org/10.1016/j.enpol.2005.07.016
Relevant answer
Answer
Thank you Mariam Kamal, but here is the bounds test result: evidence of cointegration among the variables ....
  • asked a question related to Time Series
Question
9 answers
Formula for converting the velocity into acceleration of a time series
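The thread has no posted answer; numerically, acceleration is the time derivative of velocity, so for a sampled series it is usually approximated by differencing and dividing by the sampling interval. A minimal base-R sketch, where v is a hypothetical velocity series sampled every dt time units:
dt <- 0.01                                              # sampling interval (assumed)
a_fwd <- diff(v) / dt                                   # forward difference, one element shorter than v
a_ctr <- (v[-(1:2)] - v[1:(length(v) - 2)]) / (2 * dt)  # central difference, usually less noisy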
  • asked a question related to Time Series
Question
5 answers
Does it make sense to build a regression model for time series data with breaks, like the time series I posted below with one major break that marks a shift in the series' behaviour?
Relevant answer
Answer
Yes, it is possible to develop regression models for time series data with breaks. Remember that this is also known as interrupted time series analysis, which uses regression techniques to model variations in the level or trend of a time series following an intervention or event. These models can be utilised to determine the impact of changes on a time series variable.
It is essential, however, to ensure that the breaks are well-defined and supported by theory or empirical evidence. In addition, the validity of the regression models may depend on the assumptions made regarding the nature and timing of the breaks, as well as the potential confounding variables that may influence the outcome variable. Therefore, such models require thorough consideration and validation to ensure accurate results.
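A minimal sketch in R of a standard interrupted time series (segmented regression) specification, assuming a single known break at time T0 and hypothetical variable names:
dat$level <- as.numeric(dat$time >= T0)          # level shift after the break
dat$slope <- pmax(dat$time - T0, 0)              # change in trend after the break
fit <- lm(y ~ time + level + slope, data = dat)  # pre-break trend, level change, post-break trend change
summary(fit)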
  • asked a question related to Time Series
Question
7 answers
Hi,
I'd like to ask for advice on my paper "Comparison of correlations of stock prices before and during the pandemic".
I already have the correlation coefficients calculated and shown in a correlation matrix (attached) for both time periods.
The hypothesis is that the correlation coefficients have changed. As we can see from the above picture, it is pretty clear that they have, but I'd like to include some mathematical proof. What statistical test should I use for this? A t-test? Two-sample or paired? Is such a test even applicable to this use case (comparing correlation coefficients calculated from two different time series datasets)? If not, can you advise a different method?
Thank you
Relevant answer
Answer
covid <- c(171, 0)
res <- chisq.test(covid, p = c(1/2, 1/2))  # chi-squared goodness-of-fit test against equal probabilities
res
# Chi-squared test for given probabilities
# data:  covid
# X-squared = 171, df = 1, p-value < 2.2e-16
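For the question actually asked, comparing correlation coefficients estimated in two periods, a common approach (not the one shown above) is Fisher's r-to-z transformation; a minimal base-R sketch for one pair of coefficients, where r1 and r2 are the pre- and during-pandemic correlations and n1 and n2 the numbers of observations in each period:
z1 <- atanh(r1); z2 <- atanh(r2)         # Fisher z transform of each correlation
se <- sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of the difference for independent samples
z <- (z1 - z2) / se
p <- 2 * pnorm(-abs(z))                  # two-sided p-value
This treats the two periods as independent samples of roughly i.i.d. returns; autocorrelation or overlapping windows would call for a more careful procedure.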
  • asked a question related to Time Series
Question
4 answers
I am measuring the expression of a fluorescent protein over a period of 4 hours (15 min intervals), testing 4 different conditions with 2 control groups (one positive for expression of the protein, one negative), all in triplicate. The purpose of this experiment is to ascertain what effect each condition has on expression of the fluorescent protein over the period of 4 hours. I've considered running a Two Factor Anova with Replication to ascertain whether the test conditions have a statistically significant effect on the expression of the fluorescent protein over the 4 hour time period, however I've read that this test may not be appropriate to apply to time series data. I am wondering if this is the case and if so what statistical analysis might be appropriate to perform on this data?
Relevant answer
Answer
If you have a time-dependent set of data, I suggest looking at regression in the form of ANCOVA. Also see Marie Davidian's work on longitudinal models.
David Booth
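One option in that spirit is a mixed model for repeated measures; a minimal sketch in R with nlme, assuming a hypothetical long-format data frame dat with columns fluorescence, condition, time (an integer index of the 15-min steps) and replicate:
library(nlme)
fit <- lme(fluorescence ~ condition * time, random = ~ 1 | replicate,
           correlation = corAR1(form = ~ time | replicate), data = dat)  # AR(1) within each replicate's time course
anova(fit)  # tests for condition, time and their interaction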
  • asked a question related to Time Series
Question
2 answers
For a particular pixel across multiple co-registered InSAR images, the amplitude value of the received echo may fluctuate from a mean value based on how sustained/changing is the scattering mechanism by the targets within such pixel over time. The selection of pixels as persistent scatterers candidates used for creating a deformation map through time series analysis of several acquisitions is primarily based upon the amplitude dispersion index thresholding at low values in order to properly estimate the phase stability/dispersion only when the Signal-to-Noise ratio is high enough for such pixels, according to the work of (Ferretti et al., 2001).
Source of the figure:
(PDF) Permanent scatterers in SAR interferometry. IEEE Trans Geosci Remot Sen (researchgate.net)
1) What is meant by phase stability in this case?
2) How is the phase's standard deviation across a time series affected by the amplitude dispersion?
3) How does the contribution of uncompensated propagation disturbances such as the atmospheric phase contribution and the satellite's orbital position inaccuracies besides other sources of noise affect the phase stability ?
Relevant answer
Answer
Hi there,
1 - In this context, phase stability refers to the consistency of the phase values acquired by the InSAR system over time for a particular pixel. A pixel with high phase stability will have phase values that are consistent across multiple acquisitions, while a pixel with low phase stability will have phase values that vary significantly over time. Phase stability is an important factor in InSAR analysis as it directly affects the accuracy of the deformation measurement.
2 - The amplitude dispersion across a time series of InSAR images can affect the phase's standard deviation by introducing noise to the system. When the amplitude of the signal varies significantly over time, it can result in a loss of coherence between the two acquisitions being compared, leading to increased phase noise and decreased phase stability. The selection of persistent scatterers candidates (PSCs) based on low amplitude dispersion index thresholding is done to ensure that the selected pixels have high coherence, which in turn ensures higher phase stability.
3 - Uncompensated propagation disturbances such as atmospheric phase contribution and satellite orbital position inaccuracies can significantly affect the phase stability of InSAR images. The atmospheric phase contribution is particularly problematic, as it can introduce significant phase noise into the system. This noise can be mitigated using various techniques, such as using a weather model to correct for the atmospheric contribution, or by using differential InSAR techniques that cancel out the atmospheric phase contribution. Satellite position inaccuracies can also affect the phase stability, particularly if the satellite's orbit is not accurately known. This can result in phase errors that can be corrected using various techniques, such as using GPS data to refine the satellite orbit.
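A minimal sketch of the amplitude dispersion selection step in R, where amp is a hypothetical pixels-by-acquisitions matrix of calibrated amplitudes and 0.25 is the threshold commonly used following Ferretti et al. (2001):
D_A <- apply(amp, 1, sd) / apply(amp, 1, mean)  # amplitude dispersion index per pixel
psc <- which(D_A < 0.25)                        # persistent scatterer candidates (low dispersion ~ low phase std dev)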
  • asked a question related to Time Series
Question
1 answer
I have recorded time series of stream temperature over a period of about 6 months with a frequency of 15 minutes. I am trying to attribute the temperature changes to different sources of influence. Therefore, I would like to remove the diurnal signal from my time series so that I am left with whatever else is going on besides the daily warming and cooling.
Are there any methods or papers you can recommend to solve this problem?
For background, the loggers were installed directly over the riverbed, some in shaded areas, some in the open. Most likely there is a tidal influence, which complicates things a bit since the tidal signal has periods of about 24 and 12 hours. There may be some groundwater input (probably not) and no tributaries.
Relevant answer
Answer
Dear Lea,
if I understand your query correctly, you want to analyze which factors drive temperature variability on longer than diurnal and tidal timescales in this stream. To smooth out this high-frequency "noise" and detect the signals you are actually after, I'd suggest to calculate 48-hour or even longer averages. These composite averages should yield you a clearer picture of hydrographic temperature variations above the daily/tidal, but still below the seasonal scale. You should, however, also take into account short-term ambient (atmospheric) weather variability, which may produce profound fluctuations in water temperature, especially in the upper layers of the water column. Such variations induced by weather variability are still below the seasonal time scale, but it may sometimes be difficult to discern or disentangle the two signals, especially in transition periods between two seasons or during long-lasting weather patterns. Do you collect meteorological data alongside your potamological parameters?
Hope this helps a bit.
Best,
Julius
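A minimal sketch in R of the suggested smoothing, a centred 48-hour running mean on 15-minute data (192 samples per 48 h; temp is a hypothetical temperature vector):
w <- rep(1 / 192, 192)                       # 48-hour window at 15-minute resolution
smooth <- stats::filter(temp, w, sides = 2)  # centred moving average; NAs at the ends
detail <- temp - smooth                      # the sub-daily (diurnal and tidal) signal that was removed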
  • asked a question related to Time Series
Question
7 answers
Hello Everyone,
I am doing a time series data analysis (ARDL) in which my independent variables are different types of taxes and I am examining their impact on income inequality. When I use these taxes without other variables that may affect income inequality, the majority of the tax variables are significant with the expected signs. In that case, do I have to add other variables that may affect income inequality, or is it okay to proceed with a model containing only the taxes?
Relevant answer
Answer
Thanks again Nderitu Githaiga for your suggestion.
  • asked a question related to Time Series
Question
5 answers
The data must be present as time series.
Relevant answer
Answer
Real-time data sources for power systems and electrical machines can include sensors that measure parameters such as voltage, current, temperature, and vibration. These sensors can be installed directly on the machines or on the surrounding equipment, and can send data to monitoring and control systems in real-time. Other sources of data can include digital meters, protective relays, and SCADA (Supervisory Control and Data Acquisition) systems. All of these sources can provide important information for monitoring the health and performance of power systems and electrical machines.
  • asked a question related to Time Series
Question
4 answers
I need Stata code for estimating a non-ARDL time-series model. I would prefer code that shows both the short-run and long-run results for the main variable and the control variables.
  • asked a question related to Time Series
Question
1 answer
I have two raster images. The first is a time series of NDVI derived from Landsat 8 for one year. The other is a cropland data layer containing different crops. I need to get the NDVI value for each crop present in my study area in Google Earth Engine. Please suggest how I can get the required NDVI values.
Relevant answer
Answer
As you have two raster images, you can export them with the same pixel size and convert them into vectors (point or polygon shapefiles). Then you can perform an overlay analysis (union, or intersection for a particular crop) of the two vectors, which will give you the NDVI values. If you perform the overlay analysis with a point shapefile, you will get an NDVI value for each pixel of the raster image.
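If the processing does not have to stay inside Google Earth Engine, the same result can be obtained directly with zonal statistics on the rasters; a minimal sketch in R with the terra package, with hypothetical file names and assuming the cropland layer has been resampled to the NDVI grid:
library(terra)
ndvi <- rast("ndvi_timeseries.tif")           # multi-band NDVI stack for the year
cdl  <- rast("cropland_data_layer.tif")       # crop classes on the same grid
zonal(ndvi, cdl, fun = "mean", na.rm = TRUE)  # mean NDVI per crop class for each band/date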
  • asked a question related to Time Series
Question
1 answer
Dear colleagues,
Would you be so kind as to advise me where simultaneous raw data of brain temperature and EEG time series are available for public access? I suppose such data exist mainly for mice or rats.
Relevant answer
Answer
Dear Prof,
papers on brain temperature dynamics are available.
  • asked a question related to Time Series
Question
4 answers
I have time series data on air temperature collected from an experimental field, and in parallel I have also collected air temperature from the local meteorological department for 12 months.
So now I am eager to learn about the relationship between them. Is there any statistically significant difference between them (even in any particular month)?
Please suggest the appropriate statistical test we can perform.
Thanks in advance.
Relevant answer
Answer
Hello Surajit,
First thing is to start with the basics: do the time-series have the same time-interval between data values? Are the measurements taken at the same (or different) times of day? How far apart (i.e., in km) are the 2 stations? Are they at the same or different elevations? What are the means and standard deviations of each? Are your experimental "air temperature" measurements taken with the same precautions for shade, isolation, etc. as the meteorological station data?
You can probably learn a lot just by plotting the timeseries together. That will show any consistent offset (or bias), and time-lags, and give you a good idea of their correlation (which you can calculate systematically in any case). Otherwise, plot each coincident measurement (exp. value, met-station value) as the (x,y) coordinates of a scatter plot and see how closely they line up along the (45-degree) diagonal. All of that should give you some idea of how well (or badly) a missing data-point in one series could be replaced by the corresponding point from the other.
More formally, you can use a t-test to check if the means of both time-series are different, given the null hypothesis that they are the same. But first you should probably subtract the annual cycle from each series, since both timeseries are only normally distributed about that underlying cycle. The variances should also be the same for the t-test to formally apply.
It's also easy to calculate the basic correlation between the 2 series (you can even lag/lead one or the other). You can measure the significance ("p-value") of that correlation coefficient based on its value and number of time-points by first calculating the t-test value, and looking up the p-value from the t-table. (See e.g., https://towardsdatascience.com/eveything-you-need-to-know-about-interpreting-correlations-2c485841c0b8 and google t-table to find the t-table...).
From the information you provide, most likely your 2 timeseries are highly significantly correlated. What might be interesting would be to subtract the physical "signal" from each (e.g., the annual or diurnal cycles, but also synoptic events like frontal passages..), until you are left with just random noise, which should not be significantly correlated. Then you can say here's the physical signal, and here's the random noise, which might really be saying a lot!
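A minimal sketch in R of the checks described above, where field and station are hypothetical vectors of matched measurements (same dates and times):
plot(field, station); abline(0, 1)     # scatter about the 1:1 line shows bias and spread
cor.test(field, station)               # correlation and its p-value
t.test(field, station, paired = TRUE)  # paired test on the mean difference, ideally after removing the seasonal cycle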
  • asked a question related to Time Series
Question
4 answers
I downloaded time series data from
ERA5 monthly averaged data on single levels from 1959 to present.
For every month I have a NetCDF file with
units = 'm of water equivalent'.
How can I calculate the total precipitation in a year?
Relevant answer
Answer
A depth in metres of water equivalent cannot be converted to a volume such as km^3/year on its own; you would also need the area over which that depth falls (for example, the grid-cell or basin area).
  • asked a question related to Time Series
Question
14 answers
If one has a time series dataset that contains columns for item number, date, and qty_item_sold, with a frequency of 'MS' (month start), and there are missing values ('0.0') in some months due to a lack of purchase orders for those items, how does one handle this type of dataset and prepare it for forecasting? Do we drop the rows containing the null values, or do we apply time series missingness mechanisms to fill them in?
I tried dropping the rows and applying statsforecast models such as AutoARIMA, AutoETS and Naive, but I don't think the models are forecasting the dataset properly.
Relevant answer
Answer
thanks so much Anton Rainer. How do I reach you? Maybe through LinkedIn.
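Since the thread does not show it, here is a minimal sketch of one common preparation step, building a complete monthly index per item and treating months with no orders as true zeros rather than dropping them (hypothetical data frame sales with columns item, date as a first-of-month Date, and qty):
full <- expand.grid(item = unique(sales$item),
                    date = seq(min(sales$date), max(sales$date), by = "month"))
dat <- merge(full, sales, by = c("item", "date"), all.x = TRUE)  # reintroduce the missing months
dat$qty[is.na(dat$qty)] <- 0                                     # zero demand is information, not a gap
For series that are mostly zeros, intermittent-demand methods such as Croston-type models are often considered instead of ARIMA or ETS.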
  • asked a question related to Time Series
Question
9 answers
Hello, I have 6 variables in my model, with 34 years of time series data, and I am using EViews 10. I tried adding lags (maximum lag 3), but the model still suffers from a serial correlation problem. My idea was to apply a first difference to the dependent variable; after I applied it, the serial correlation was removed from the model, but I am not sure whether this approach is right or not.
My questions are :
1- How to remove serial correlation for ARDL model?
2- My idea that I applied it right or wrong?
thanks in advance .
Relevant answer
Answer
1- There are several ways to remove serial correlation in an ARDL (Autoregressive Distributed Lag) model:
Increasing the number of lags in the model: If the current lag length is not sufficient to capture the underlying autocorrelation structure in the data, increasing the number of lags may resolve the serial correlation problem.
Differencing the data: This involves subtracting the current observation from the previous observation, which can help remove serial correlation. If the first difference does not remove the serial correlation, you may try taking the second or higher differences until the serial correlation is removed.
Using a more general autoregressive structure: The ARDL model assumes a linear autoregressive structure. If the data exhibits more complex autocorrelation patterns, you may consider using more flexible models such as ARIMA (AutoRegressive Integrated Moving Average) or state space models.
2- Applying the first difference to the dependent variable to remove serial correlation is a common approach, and it can be an effective way to resolve the serial correlation problem. However, it is important to consider the implications of differencing the data, as it will change the interpretation of the parameters and the nature of the relationships between the variables. Before applying the first difference, it's a good idea to check the order of integration of each variable and ensure that the first difference is the appropriate order to remove the serial correlation.
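For readers working outside EViews, a minimal sketch of the same checks in R with the dynlm and lmtest packages (y and x are hypothetical ts objects and the lag orders are arbitrary):
library(dynlm); library(lmtest)
fit <- dynlm(y ~ L(y, 1:2) + L(x, 0:2))       # a small ARDL(2,2)
bgtest(fit, order = 2)                        # Breusch-Godfrey test for remaining serial correlation
fit2 <- dynlm(d(y) ~ L(d(y), 1) + L(x, 0:1))  # same idea after first-differencing the dependent variable
bgtest(fit2, order = 2)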
  • asked a question related to Time Series
Question
3 answers
If it is assumed that the time series X affects the time series Y, is it possible to quantify the effect of X attractor, reconstructed in the phase space through Takens's theorem, on the reconstructed Y attractor?
There is Granger causality, for example, to model this causal behavior in time series. Is there any similar technique to model the causality between two attractors in the phase space ( X -> Y)?
Relevant answer
Answer
I think you need to refine what you mean by "affects" and "causality". You can certainly calculate the cross-correlation function of one or more of the variables in the two attractors, but if they reside in independent basins, there is no reason to expect their clocks to be synchronized or even to run at the same rate. A first step might be to convert an orbit on each attractor to a scalar time series using Takens' theorem and then to calculate the cross correlation function between one and a time-shifted version of the other as a function of the time shift. Such a plot would likely have structure, but it may not be reproducible or easy to interpret.
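A minimal sketch in base R of the embedding and cross-correlation steps just described, where sx and sy are hypothetical scalar observables recorded from the two systems:
emb_x <- embed(sx, dimension = 3)                # delay embedding of sx (dimension and unit delay are arbitrary choices)
ccf(sx, sy, lag.max = 100, na.action = na.pass)  # cross-correlation of sy against time-shifted sx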
  • asked a question related to Time Series
Question
1 answer
Intervention analysis in time series refers to the analysis of how the mean level of a series changes after an intervention, when it is assumed that the same ARIMA structure for the series holds both before and after the intervention. Given this, is it possible to trust the result when using univariate data without other predictor variables?
Relevant answer
Answer
Intervention analysis is fundamentally about a marked change in the mean level of a univariate time series, so yes: it can be applied to univariate data without other predictor variables, and the result can be trusted provided the assumed ARIMA structure is adequate before and after the intervention.
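A minimal sketch in R of a simple step intervention estimated alongside an ARIMA structure, where y is a hypothetical ts object, T0 the known intervention time, and the ARIMA order arbitrary:
step <- as.numeric(time(y) >= T0)                 # 0 before the intervention, 1 after
fit <- arima(y, order = c(1, 0, 1), xreg = step)  # same ARIMA structure before and after, level shift via xreg
fit                                               # the xreg coefficient estimates the change in mean level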
  • asked a question related to Time Series
Question
3 answers
Hello Everyone,
In my study, I'm utilizing Eviews 10 software to analyze secondary time-series data. According to the manual, it includes inferential statistics and a reliability test. Could someone kindly clarify what tests go under inferential tests and what sort of tests I need to do to confirm the reliability of the collected data?
Thanks
Relevant answer
Answer
Inferential statistics is the appropriate approach for analysing data gathered through observation or measurement; it is used to draw conclusions about a population based on the data collected. Typical inferential procedures include t-tests, F-tests, confidence intervals, and the significance tests reported alongside regression coefficients.
  • asked a question related to Time Series
Question
4 answers
Hello,
I am studying drought propagation in two catchments using SPI and SSI. My question: I would like to produce a graph like the one in the attached picture. Do you have any useful tutorial, or suggestions for R packages, suitable for producing the same graph?
Thanks in advance
Relevant answer
  • asked a question related to Time Series
Question
4 answers
Hi all,
Does anybody know how or where I can download a multi-model mean time series of the CMIP6 climate projection scenarios for a certain location? It seems I can only find output from separate models, which is a lot of work to put together.
Thanks
Relevant answer
Answer
You may find this website to be useful. First, click on Gridded data, then select ensemble and download the NetCDF file.
  • asked a question related to Time Series
Question
2 answers
My undergraduate thesis is based on time-series data and I am using the ARDL model. I got some sample theses from online websites, but some are missing parts and others have added extra parts, so I couldn't figure out the correct structure from them. Therefore, if anyone has an ARDL-model-based thesis, please share it. It would be helpful for my final-year thesis.
Thanks in advance
Relevant answer
Answer
  • asked a question related to Time Series
Question
5 answers
Hello,
I want to decompose oil price shocks into three elements (oil supply shocks, aggregate demand shocks and oil-specific demand shocks) following Kilian (see attachment). In order to implement this in a programming environment like R, a matrix of the data is needed. I cannot find information on which data this matrix should contain. I think there is a time series of oil prices, but is there any pre-work to be done to separate the three elements in advance?
The theoretical code is given as: c(...) includes all the data
m <- matrix(c(....),2,2)
cm <- chol(m)
cm
t(cm) %*% cm #-- = 'm'
crossprod(cm) #-- = 'm'
Thank you.
Relevant answer
Answer
I do not understand why one should use a Cholesky decomposition in this case. This decomposition is (in my understanding) used for solving linear equation systems. This means you have to specify the equation (system) beforehand. Obviously, the equation whose form and parameters are to be found is: oil price = f(oil supply, aggregate demand, specific demand, other factors). Under the assumption that the relations are linear (in some transformation), one can apply the usual estimation methods, provided one has sufficient data. One need not care how the method solves the system, e.g. how it calculates the parameters.
The problem I wanted to show in my first answer is that it is not easy to differentiate between supply and demand. E.g. if you have the GDP for a certain period, is it supply or demand? What is a shock? I know this term is frequently used by New Classical and New Keynesian economists, but the meaning is unclear.
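For what it is worth, in Kilian's setup the matrix being decomposed is the covariance matrix of the reduced-form VAR residuals, not the raw data; the three series entering the VAR are oil supply growth, an index of global real activity, and the real price of oil, in that recursive order. A minimal sketch in R with the vars package, where oil_data is a hypothetical ts matrix with those three columns and the lag order is arbitrary:
library(vars)
fit <- VAR(oil_data, p = 12, type = "const")  # reduced-form VAR on (supply growth, real activity, real oil price)
u <- resid(fit)                               # reduced-form residuals
P <- t(chol(cov(u)))                          # lower-triangular Cholesky factor imposes the recursive ordering
eps <- t(solve(P, t(u)))                      # columns: supply shock, aggregate demand shock, oil-specific demand shock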
  • asked a question related to Time Series
Question
5 answers
I want to understand the difference between structural shifts and regime shifts in a time series. Both are time-varying models.
Relevant answer
Answer
Here you will find some useful examples of structural change:
  1. The adoption of new technologies, such as the shift from analog to digital communication systems, can lead to a structural change in the way that information is transmitted and processed.
  2. A change in policy, such as an increase in the minimum wage, can lead to a structural change in the labor market, affecting the demand for and supply of certain types of workers.
  3. A demographic shift, such as an aging population, can lead to a structural change in the social and economic landscape, as the needs and preferences of different age groups may change.
Here are a few examples of regime shift:
  1. A sudden shift in the climate, such as an abrupt increase in temperature or a sudden change in precipitation patterns, can lead to a regime shift in an ecosystem, affecting the distribution and behavior of different species.
  2. A financial crisis, such as a stock market crash, can lead to a regime shift in the economy, as it can affect the availability of credit, the value of assets, and the overall level of economic activity.
  3. A political regime shift, such as a revolution or a coup, can lead to a rapid change in the way that a country is governed and the policies that are implemented
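On the modelling side, structural breaks in a series are often dated with the strucchange package, while regime shifts are typically handled with regime-switching (e.g. Markov-switching) models; a minimal sketch of the first, where y is a hypothetical ts object:
library(strucchange)
bp <- breakpoints(y ~ 1, h = 0.15)  # search for breaks in the mean; h is the minimum segment share
summary(bp)                         # information criteria across the number of breaks
confint(bp)                         # confidence intervals for the break dates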
  • asked a question related to Time Series
Question
4 answers
e.g., traffic volume versus crashes per year (other than using simple correlation)
Relevant answer
Answer
Assuming the 2 series are on the same time scale (e.g. days), I'd do a lot of comparison using differences (lags); correlograms are fun to look at if you know what they say.
  • asked a question related to Time Series
Question
3 answers
I want to build a prediction model for the concentration of pollutant particles using meteorological data together with past particle concentrations. My output is particle concentration (regression) and my inputs are meteorological data. My data are time series. Which models take the sequential nature of a time series into account? Thank you in advance.
Relevant answer
Answer
Grey modelling is one of the more successful recent tools for time series estimation. For example, you can use the EXGM(1,1) model.
  • asked a question related to Time Series
Question
1 answer
I am working on a univariate time series prediction problem. I have found many tools available for time series forecasting, but wanted to know:
are there any machine learning models that can predict a time series, as ARIMA does, without converting it into supervised data?
Please suggest.
  • asked a question related to Time Series
Question
3 answers
Dear Sir/Madam,
I am using time-series secondary data in my research, but I am not sure how to test the validity and reliability of the data.
Relevant answer
Answer
Secondary data were collected by other persons or organizations, so their reliability and validity depend upon the reputation of that person or organization. Once you are confident that the secondary source is authentic and reputable, it is not necessary to check its reliability and validity further. In the case of primary data, it is necessary to check the reliability and validity of the questionnaires or data collection tools.
  • asked a question related to Time Series
Question
3 answers
I have a predefined correlation matrix for N variables and want to generate time series for these variables satisfying that correlation. The series could have any number of data points (e.g., 300).
I tried the following but couldn't preserve the correlations among all N variables.
  1. generated a time series (randomly) for one of the variables and stored.
  2. manipulated the time series of first variable by adding some noise to get desired correlation value between 1st and second.
  3. once second time series reflects the predefined correlation between 1st and 2nd, then stored and generated time series for all N variables in same way satisfying the predefined correlation of all with first variable.
  4. now, moving on to the correlation of the 2nd and 3rd, I try to modify the time series of the 3rd to maintain its predefined correlation with the 2nd, but then its correlation with the 1st is disturbed.
  5. beyond this my method doesn't work.
Any suggestion(s) or already written script(s) in MATLAB, R or Python will be much appreciated.
Relevant answer
Answer
Time series data is usually dependent on time. Pearson correlation, however, is appropriate for independent data. This problem is similar to the so called spurious regression. The coefficient is likely to be highly significant but this comes only from the time trend of the data that affects both series.
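For the construction itself, the usual device (a sketch, assuming the target matrix R is positive definite and that serially independent Gaussian series are acceptable) is to multiply independent noise by the Cholesky factor of R:
n <- 300; N <- nrow(R)           # R is the predefined N x N correlation matrix
Z <- matrix(rnorm(n * N), n, N)  # independent standard normal series
X <- Z %*% chol(R)               # columns of X have correlation approximately equal to R
round(cor(X), 2)                 # check against the target
If each series also needs its own autocorrelation structure, the same idea can be applied to the innovations of N univariate models, though matching cross-correlations and autocorrelations simultaneously is harder.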
  • asked a question related to Time Series
Question
6 answers
If I have a dataset with different periods, how can I deal with it as a time series?
Relevant answer
Answer
I agree with the idea of Anton Rainer of a joke
  • asked a question related to Time Series
Question
3 answers
For example, we have a multivariate time series comprising 8 univariate time series. I am aware some deep learning libraries can help to predict each of the time series in the multivariate series. I want to control what to forecast (for instance, forecast the first 4 series). Is it possible to use such deep learning libraries to accomplish that or there is a better way to do it?
Thanks
Relevant answer
Answer
Thank you Kareem Omran for your comprehensive response. I will give the models you mentioned a try. I have been using an LSTM encoder-decoder, with which I have not been able to achieve my goal.
Once again, thank you massively!
  • asked a question related to Time Series
Question
6 answers
I'd appreciate your help with the following:
what are the implications of finding evidence of structural breaks in a multivariate time series?
Relevant answer
Answer
The first step is to ascertain whether the structural break is instantaneous or gradual and whether it is single or double. Then you determine whether it is additive or innovative. There are various treatments for each combination: single or double break, additive or innovative, and instantaneous or gradual.