How can I build confidence that my strategy's returns are better?
For example, I have two return sequences: R1 is from my strategy, R2 is from a benchmark:
R1: [r_11, r_12, r_13, r_14, ..., r_1n]
R2: [r_21, r_22, r_23, r_24, ..., r_2n]
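Concretely, the quantity I want to test is the difference of the two Sharpe ratios, under the null hypothesis that it is zero (the notation below is mine, using the standard Sharpe ratio definition):

$$\Delta = \widehat{SR}_1 - \widehat{SR}_2 = \frac{\hat{\mu}_1}{\hat{\sigma}_1} - \frac{\hat{\mu}_2}{\hat{\sigma}_2}, \qquad H_0\colon \Delta = 0$$

where $\hat{\mu}_i$ and $\hat{\sigma}_i$ are the sample mean and sample standard deviation of the returns in R_i.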
I have read a good paper, "Robust performance hypothesis testing with the Sharpe ratio". It describes a method to test the Sharpe ratio difference between two return series, and in my opinion the method makes sense, so I want to implement it. Before doing so, I would like your opinions on a few questions:
1. Have you encountered this problem?
2. Are there better ways to evaluate a strategy?
3. Any other thoughts?
Looking forward to your reply! Thank you.
The paper describes a block bootstrap method with a studentized test statistic. It handles the heavy-tailed distribution and the time-series characteristics of returns very well. I have studied it for a long time and hope to implement it in Python.
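For reference, here is a minimal sketch of the block-bootstrap idea in Python. Note the simplifications: it uses a circular block bootstrap with a symmetric percentile-style p-value, not the paper's studentized statistic (which additionally computes a HAC standard-error estimate inside each bootstrap replication), and the function names, block size, and toy data below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def sharpe(r):
    # Per-period Sharpe ratio: sample mean over sample standard deviation.
    return np.mean(r) / np.std(r, ddof=1)

def sharpe_diff_pvalue(r1, r2, block_size=10, n_boot=5000, seed=0):
    """Two-sided p-value for H0: SR(r1) == SR(r2), via a circular block
    bootstrap on the *paired* series, which preserves the cross-correlation
    between the two series and the autocorrelation within each block."""
    rng = np.random.default_rng(seed)
    r1, r2 = np.asarray(r1, float), np.asarray(r2, float)
    n = len(r1)
    assert len(r2) == n, "the two series must be aligned and equal length"

    d_hat = sharpe(r1) - sharpe(r2)  # observed Sharpe ratio difference

    n_blocks = int(np.ceil(n / block_size))
    boot = np.empty(n_boot)
    for b in range(n_boot):
        # Draw random block start points; indices wrap around the end of
        # the sample (circular bootstrap), so every block has full length.
        starts = rng.integers(0, n, size=n_blocks)
        idx = (starts[:, None] + np.arange(block_size)) % n
        idx = idx.ravel()[:n]  # trim to the original sample length
        boot[b] = sharpe(r1[idx]) - sharpe(r2[idx])

    # Symmetric percentile p-value: how often the centered bootstrap
    # statistic is at least as extreme as the observed difference.
    return float(np.mean(np.abs(boot - d_hat) >= np.abs(d_hat)))

if __name__ == "__main__":
    # Toy example with made-up normal returns, just to exercise the function.
    rng = np.random.default_rng(1)
    r1 = rng.normal(0.0010, 0.01, size=1000)  # hypothetical strategy returns
    r2 = rng.normal(0.0005, 0.01, size=1000)  # hypothetical benchmark returns
    print(sharpe_diff_pvalue(r1, r2))
```

The block size matters when returns are serially correlated (too small destroys the dependence structure, too large leaves few effective resamples); the studentized version in the paper should also give more accurate rejection rates in small samples.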