Once a week, MAiMATE’s AI undergoes “continuous learning” to adapt to the most recent market conditions. Alongside continuous learning there is a feature called “EDUCATE” that lets you evaluate your AI agent’s trades and feed your own judgment back into it. In this article, we discuss these two functions in detail.
- Continuous learning to adapt to the most recent market
- The Effects of Continuous Learning and EDUCATE
- Education has a gradual but steady impact on AI
Continuous learning to adapt to the most recent market
Continuous learning is the process by which all AI agents existing as of 10:00 a.m. every Saturday are retrained to incorporate the successes and lessons of their most recent trade results.
By relearning every weekend, each AI agent updates itself to match the most recent market.
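As a mental model, the weekly cycle can be sketched in a few lines of Python. The `Agent` class, its `update` method, and the one-week window are illustrative assumptions, not MAiMATE’s actual implementation:

```python
from datetime import date, timedelta

class Agent:
    """Toy stand-in for a MAiMATE AI agent (illustrative only)."""
    def __init__(self):
        self.updates = 0
    def update(self, trades):
        self.updates += 1  # in reality: a reinforcement-learning update

def weekly_relearn(agent, trade_log, today):
    """Retrain only on Saturdays, using the last week's closed trades."""
    if today.weekday() != 5:  # Monday = 0 ... Saturday = 5
        return agent
    cutoff = today - timedelta(days=7)
    recent = [t for t in trade_log if t["closed"] > cutoff]
    if recent:
        agent.update(recent)  # absorb the week's successes and failures
    return agent
```

On any day other than Saturday the agent is returned unchanged, which mirrors the fixed weekly schedule described above.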
Newly created AI agents have not yet undergone continuous learning
In fact, the results shown for an AI agent immediately after its creation have not yet benefited from continuous learning. So for a newly created agent to have produced good results across the entire past three years, its current trading method — which mostly reflects its most recent learning — would have to be a universal method that works over the whole period.
How an AI agent learns when it is created
- The AI agent is born according to the user’s settings and learns to trade on the last three years of data.
- Learning proceeds sequentially from the past to the present, so, like human memory, content learned long ago tends to fade gradually while recently learned content is retained more strongly.
- The trained AI agent is then sent back three years and made to trade over that period without updating its trading method through continuous learning, and those results are stored.
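The two phases above can be sketched as a toy model: recency-weighted learning from oldest to newest, followed by a backtest in which the policy is frozen. The decay factor and the “policy” (a weighted average of price changes) are purely illustrative assumptions, not the real reinforcement-learning machinery:

```python
def recency_weights(n, decay=0.99):
    # newest sample (index n - 1) gets weight 1.0; older samples fade
    return [decay ** (n - 1 - i) for i in range(n)]

def train(prices, decay=0.99):
    """Phase 1: learn from oldest to newest. Here the 'policy' is just a
    recency-weighted average of price changes, so recent data dominates,
    mimicking the fading memory described above."""
    changes = [b - a for a, b in zip(prices, prices[1:])]
    w = recency_weights(len(changes), decay)
    return sum(wi * c for wi, c in zip(w, changes)) / sum(w)

def frozen_backtest(bias, prices):
    """Phase 2: replay the past period with the policy fixed — no updates,
    like the creation-time backtest described above."""
    pnl = 0.0
    for a, b in zip(prices, prices[1:]):
        pnl += (b - a) if bias > 0 else (a - b)  # long if bias > 0, else short
    return pnl
```

Because phase 2 never calls `train` again, the frozen policy can easily lose money on old regimes it has half-forgotten — which is exactly why creation-time backtests look poor.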
MAiMATE’s AI learns for three years, then goes back three years and trades as if it were encountering that market environment for the first time. It does not adjust or fit its parameters to past prices, as is often done when creating a typical automated trading program.
A newly created AI’s past performance is inevitably poor — and that is why continuous learning is needed.
Is there a universal and magical trading method that can keep you profitable for any period of time and in any market environment?
Let’s look at the full-period results of the top nine AI agents (the top nine by realized gains and losses over the last year) as of 12:00 p.m. on September 9, 2019.
As the graph above shows, even an AI agent with excellent realized gains and losses over the past year finds it difficult to keep winning with the same algorithm over the entire period. We believe this suggests that a universal trading method usable across all periods is quite hard to find, and that continuous learning is needed to review the trading method from time to time.
Is an AI with poor recent performance being dragged down by past successes?
Some users may wonder, “My AI agent’s recent performance is poor. Shouldn’t the last few weeks’ learning weigh more heavily?” What is happening here is that the agent once found a trading method that worked extremely well in a certain past period, and that successful experience does not fade easily during the most recent learning. That’s very human, isn’t it?
The Effects of Continuous Learning and EDUCATE
Next, let’s check the effectiveness of “continuous learning” and “EDUCATE”.
The trading results for the AI agent under verification, for the period from Friday, August 23, 2019 to Friday, August 30, 2019, are as follows.
You can see that the agent took a large loss on Friday, August 30. The decision to cut the loss was itself unavoidable, but we give the August 30 settlement a “Bad” on the grounds that the decision could have been made a little sooner — the loss could have been cut on August 28 or 29.
Experiment #1: The effect of continuous learning alone — a smaller loss
The purpose of this experiment is to see how “continuous learning” and the “Bad” given for August 30 change the trading results. First, we check the effect of the AI agent’s spontaneous continuous learning alone, ignoring “Good” and “Bad”. We took the AI after continuous learning had absorbed the data from the period above, set it back to August 23, and had it trade again; the results are shown below. Note that the slight differences in evaluation gains and losses come from the difficulty of simulating an exact match down to the hour, minute, and second; they do not change the conclusions of the analysis.
As shown above, even without any “Good” or “Bad”, the AI agent spontaneously reflects on the size of the loss and mitigates it by moving up the timing of the loss cut.
Experiment #2: The effect of education — no change after one “Bad”!
Now, let’s see what changes when we give a “Bad” for Friday, August 30, encouraging the AI agent to reflect more strongly.
There was no change. In other words, even though it was prompted to reflect strongly, the AI agent still judges that cutting the loss on August 29 is the best decision.
Experiment #3: Persistent education — a change on the seventh time!
Still, we wanted it to cut the loss on August 28, so we repeated continuous learning over this period, “Bad” included, many times. As a result, after the seventh round of continuous learning, the trade results showed the loss-cut moved forward by one day.
Education has a gradual but steady impact on AI
How strongly the AI is praised by a “Good” and how strongly it is scolded by a “Bad” are calibrated based on careful analysis. We deliberately designed the system so that a single “Good” or “Bad” will not easily overwrite the learning results the AI agent has accumulated over time, so the results above are as expected. Rest assured, however, that your “Good”s and “Bad”s will steadily influence the AI agent’s behavior little by little, and that influence becomes its personality.
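As a rough illustration of why one evaluation barely moves the needle while seven can flip a decision, imagine each “Good” or “Bad” nudging the agent’s score for a decision by a small fixed amount. The `strength` value and the threshold are invented for illustration, not MAiMATE’s real calibration:

```python
def apply_feedback(score, feedback, strength=0.1):
    """One 'Good' (+1) or 'Bad' (-1) nudges the agent's rating of a
    decision by a small, bounded amount."""
    return score + strength * feedback

score = 1.0       # the agent currently rates the Aug-29 loss cut as best
threshold = 0.5   # below this, it would switch to the earlier Aug-28 cut

for round_number in range(7):
    score = apply_feedback(score, -1)  # a 'Bad' for the late cut, each week
# one 'Bad' leaves the score well above the threshold;
# seven accumulated 'Bad's push it below, flipping the decision
```

The design choice this illustrates: small per-feedback steps protect accumulated learning from a single noisy evaluation, while consistent feedback still compounds.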
Don’t force yourself to evaluate it.
If you don’t have a policy on which trades to praise or scold, one option is not to force education and simply let the AI agent learn on its own. Leaving it alone is better than praising and scolding it inconsistently.
If you have an education policy, educate your AI.
If you already have clear preferences about how your AI should trade, by all means educate it. For example, a “small losses, large gains” policy: praise trades that gain more than 100 pips and scold trades that lose more than 60 pips. As the experiments above show, AI does change its behavior. What matters is giving consistent “Good” and “Bad” feedback over a long period, and because the system is designed so that a single “Good” or “Bad” will not break the AI agent, don’t be afraid to look back over and evaluate your AI agent’s trades at least once a week.
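A policy like the one above is simple enough to write down mechanically. The thresholds are the ones from the text; the function itself is just an illustration of applying them consistently:

```python
def educate(pips):
    """Mechanical 'small losses, large gains' policy: praise trades
    gaining more than 100 pips, scold trades losing more than 60 pips,
    and give no feedback in between."""
    if pips > 100:
        return "Good"
    if pips < -60:
        return "Bad"
    return None  # within tolerance: stay silent rather than be inconsistent
```

Deciding your rule in advance like this is what makes weekly feedback consistent instead of mood-driven.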
Many common automated trading services run on fixed logic, and so quite few of them can adapt to changing environments over a long period. The same can be said of Systre 24, offered by Invast Securities.
MAiMATE attempts to overcome this challenge of automated trading services with its “continuous learning” feature, and makes each AI smarter and more unique by incorporating its creator’s experience through EDUCATE. This is only possible because MAiMATE uses cutting-edge technology. We publish an overall AI performance report every month, and as of April 2020 we can see that continuous learning has been working very well so far.
I don’t think there is another platform where roughly 5,000 different reinforcement-learning trading AIs undergo continuous learning every week. It will be very interesting to see how they develop from here.