FPV Diversity Module Shootout: LaForge v4 vs True-D v3.6 vs Pro58 (AchilleΣ Plus firmware) vs RX5808 (AchilleΣ v2 firmware)
"THE BEST" diversity receiver module for your FPV Goggles is a term that is very difficult to quantify and to objectively award to a product.
In this series of FPV VRx module tests, I decided to reduce the variables to a minimum and analyze the DVR footage like it has not been done before. You can find a more detailed description of the test rig and methodology below.
TEST RIG:
The test rig is hand-made, using mostly wooden and plastic surfaces, mounts and screws. All the wires are the exact same type and length, with the power wires routed separately from the twisted pairs of video wires. The modules are equally spaced and held firmly in place. The omni antennas are all Lumenier AXII, attached via 90 degree adapters and mounted vertically, upright. The patch antennas are all Menace Invader, attached via 45 degree adapters and facing the same way. The exact same DVR units, BECs and L/C filters have been used, and the whole rig is powered from a single 4S LiPo battery.
All the modules are on stock settings and were calibrated properly, using a TBS Unify at 25mW, on 5800MHz.
METHODOLOGY:
This test series is split into 3 parts.
- The 1st part is the "Antenna Swap" test, where each set of omni/patch antennas was moved to the next VRx module, after each test run. The test was conducted in an outdoor area, with a quad using a TBS Unify VTx at 25mW, on F4 (5800MHz) frequency.
This 1st part is not only meant to identify the performance of each module, but also to confirm that none of the antenna sets used for these tests has any tendency, advantage or disadvantage compared to the other sets. For this reason the same configuration was used in Parts 2 and 3.
- The 2nd part is the "Outdoor Track" test. The test was conducted in an outdoor area, with various quads using a TBS Unify or ImmersionRC Tramp v2 VTx at 200mW, with whip antennas most of the time.
- The 3rd part is the "Indoor Track" test. The test was conducted in an indoor area, an underground parking lot, using quads with CP antennas, flying at 200mW.
Thousands of frames have been analyzed and compared via software and then cross-checked manually. Noise and blur data have been collected, organized and "translated" into an easier-to-understand format, in the graphs at the end of the test runs.
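For readers who want to try this kind of analysis themselves, here is a minimal sketch of how per-frame noise and blur could be scored from a DVR clip. It assumes OpenCV and NumPy and uses illustrative metrics (Laplacian variance for blur, deviation from a median-filtered copy for noise); it is not the exact software used for this shootout.

import cv2
import numpy as np

def frame_scores(video_path):
    """Yield (frame_index, blur_score, noise_score) for every frame of a DVR clip."""
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Low Laplacian variance = soft / washed-out frame (blur indicator).
        blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()
        # Mean absolute difference from a median-filtered copy approximates
        # the "snow" / impulse noise typical of analog video breakup.
        denoised = cv2.medianBlur(gray, 3)
        noise_score = float(np.mean(np.abs(gray.astype(np.float64) - denoised.astype(np.float64))))
        yield idx, blur_score, noise_score
        idx += 1
    cap.release()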
PART 1 - Antenna Swap test
NOTES FOR TEST RUN #3:
I was wondering why the Pro58 was struggling in some parts of this flight. After reviewing the footage over and over again, it turns out that the quad was on the far right of the test rig, out of sight of the patch antenna and with the Pro58's omni obscured by the omni antennas of the other 3 modules and the DVR units. Next to the Pro58 was the True-D, then the RX5808 and, last and unobscured, the LaForge.
Number of frames per "quality range" for each module and test run:
(the number next to each module is the antenna set used, from 1 to 4, swapped around from one test to the other)
Average allocation of noise (overall results of all test runs):
PART 2 - Outdoor Track test
NOTES FOR PART 2:
The interference shown in the Bonus Footage was not planned. The test quad was landed and I kept recording in order to compare how well each module could "resist" the video signal that was being transmitted from further away, on the same frequency. That footage cannot be analyzed the same way, because the software can only provide sufficient data to measure video quality / breakup - not a change of scene.
For those wondering why the Pro58 performed badly in almost every situation, during Part 2 of this test series:
What the data from all the tests "tells" me is that the Pro58’s strong suit is tricky situations with a lot of break-up. This is evident in the results of Part 1 and in the data from Part 3 (indoor track with CP antennas on the quads).
There were very few instances of this kind during the test runs of Part 2, and with such a small sample I cannot draw conclusions based on data. This is why the noise levels in the comparison charts are much lower than in Part 1; the analysis is essentially nit-picking even the smallest breakups, discolorations and other artifacts in the video feed (which are already exaggerated by the DVR recording and would often be unnoticeable in the goggles). In this scenario the Pro58 performed worst of all the modules, and this is clearly evident just by watching the footage.
Part 3 was conducted in an "RF mayhem" environment: an indoor parking lot full of cement columns and reflective surfaces.
Number of frames per "quality range" for each module and test run:
Average allocation of noise (overall results of all test runs):
PART 3 - Indoor Track test
NOTES FOR PART 3:
The test was conducted in an indoor, underground parking lot, full of cement surfaces, with various quads. Most of the time there were at least 2 quads powered on. In this scenario, multipathing and reflection of signals definitely had a party - and that was the whole purpose of this test. The distances are fairly short but the signals are bouncing all over the place until they land on the antennas of the modules.
The settings on all the modules are the stock settings. I am aware that lowering the "switching speed" would probably produce different results, but for the sake of consistency everything remained exactly the same in all 3 parts of this test. Changing the settings of the modules introduces variables with practically unlimited combinations, which cannot all be tested. It is something I am considering for the upcoming tests, though, following advice from the manufacturers specifically for each scenario.
Keep in mind that the results of one run are not meant to be directly compared to those of another run. Using different quads with different setups creates a different "baseline" for the analysis of each test run. This is why, in order to summarize, I do not simply add up all the noisy frames of each module, but instead calculate how the noise is allocated (%) among the modules.
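As a rough illustration of this "allocation" idea (with made-up counts, not the article's data), each module's noisy-frame count is first expressed as a share of that run's total, and the shares are then averaged across runs:

def noise_allocation(runs):
    """runs: list of dicts mapping module name -> noisy frame count for one run."""
    shares = {}
    for run in runs:
        total = sum(run.values()) or 1
        for module, count in run.items():
            shares.setdefault(module, []).append(100.0 * count / total)
    # Average each module's per-run share, so every run carries equal weight.
    return {module: sum(s) / len(s) for module, s in shares.items()}

# Fictitious example:
print(noise_allocation([
    {"LaForge": 120, "True-D": 300, "Pro58": 90, "RX5808": 150},
    {"LaForge": 40,  "True-D": 110, "Pro58": 35, "RX5808": 60},
]))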
Number of frames per "quality range" for each module and test run:
Average allocation of noise (overall results of all test runs):
RELATED LINKS:
How to do the OSD mod and flash AchilleΣ Plus on the Pro58 receiver module
How to flash AchilleΣ v2 on the RX5808 Pro and the RX5808 Pro Plus receiver modules
Fatshark HD3 Core FPV goggles power button and fan mods (feat. FuriousFPV True-D v3.6)
How to install UBAD LaForge V4 on Fatshark HD3 Core FPV goggles
CREDITS: Huge thanks to
KokoblokoFPV for his mad piloting and his patience: https://goo.gl/gxK4Z9
Hoonigan for his LaForge v4: https://goo.gl/No5Szz
TonyFPV for his True-D v3.6 and Menace Invader: https://goo.gl/zXNBYv
Hobbytrip.gr for the AXII antennas: https://goo.gl/KxzUJ4
TsirosFPV for his Menace Invaders: https://goo.gl/rjLM3q
Lee13 for his Menace Invaders: https://goo.gl/b6PQsF
PRODUCT LINKS:
If you like my content and want to support the channel, you can use the affiliate links below. I get a small kickback from some of these stores if you decide to buy something, and it costs you nothing extra.
Thank you in advance!
UBAD LaForge v4:
https://goo.gl/WsG2Li (UBAD)
https://goo.gl/ofDA6S (Ebay)
https://goo.gl/86vttb (Drone-FPV-Racer)
https://goo.gl/tGGoA9 (GetFPV)
Eachine Pro58:
https://goo.gl/fFKU2n (Banggood)
https://goo.gl/YUDPzc (Ebay)
https://goo.gl/3WFNQ7 (Banggood)
FuriousFPV True-D v3.6:
https://goo.gl/fNNeXA (FuriousFPV)
https://goo.gl/Dy1rge (Banggood)
https://goo.gl/NfywvJ (Banggood)
https://goo.gl/JwWgCw (Ebay)
https://goo.gl/WRUrpo (HobbyKing)
Realacc RX5808 Pro Plus:
https://goo.gl/YDSFQm (Banggood)
https://goo.gl/xB4YzB (Ebay)
https://goo.gl/xRo27X (Banggood)
AchilleΣ firmware website:
You can also tip my 'piggy bank' via PAYPAL, here: https://goo.gl/gqrYuh
or click / save these convenient BOOKMARKS in your browser and use them to visit the websites, before purchasing new goodies:
- BANGGOOD (opens the RC Toys & Hobbies category, showing the Newest products): https://goo.gl/R4TceG
- GEARBEST (opens the Toys & Hobbies R/C category, showing the Newest products): https://goo.gl/wwTJwt
- EBAY (the shortlink has to be bookmarked): https://goo.gl/gFRdy5
- FPVMODEL (newest products): https://goo.gl/zmPxbf
- HORUSRC (main page): https://goo.gl/n2fJDq
A huge THANK YOU in advance for your support!
Enjoy!
If you are into FPV flying and are using goggles with a video receiver module, it should be clear enough. If not, then the article is not for you ;)
Did you also watch the videos of the article?
It would be easier (for me) to just walk away, but I am trying to give you useful feedback. I have no interest in bashing anyone. I am an electronics engineer and have 3 FPV quads, 1 FPV airplane, and 3 goggles at different price levels. I am not claiming to be an expert, or even at your level, but if I can't get value from your article, then I doubt many can.
I wrote a long detailed explanation, but it comes down to this. The data you generated does not seem to be a useful measure of the perceived video quality. I would not attempt to say which module is best or under what conditions.
The one conclusion I can draw from all of this, and this is no reflection on you or your article, is that the FPV video quality we put up with is total crap. The fact that great pilots can do what they do with this video input is amazing, and a testament to the human visual processing system. If I had to drive to work with this level of quality through my windshield I would be scared to death.
Concerning the shootout, it is a very complicated and hard-to-tackle subject, made simple: I record footage from 4 modules simultaneously, analyze it and see which module produces the least severe breakup in various scenarios. The complicated part is to reduce the variables as much as possible and to do it objectively and accurately. Every part of the test rig and of the test itself has been thought out extensively in order to achieve that.
I started with an antenna test, to make sure there is no tendency in any of the omni/patch antenna sets that could affect the results in the next 2 scenarios. I have included the graphs of the video "noise" for each test run next to each module; you may see spikes that seem to pass by unnoticed, but if you review the video at a slower speed you will see the breakup.
The data is there to show which module performs best in each use case. Perceived video quality leans towards subjective ranking of the performance of the modules and depends on the user.
I thank you for your feedback, but you are the first one to tell me that they had no idea what it is about after reading it / watching the videos. If there are specific gaps or parts I have not explained well, please let me know and I will be happy to elaborate.
1. You need to swap the UUTs (units under test) to different locations in the fixture. Keep the antennas where they are, but swap the receivers around. If you are already doing that, great, but you keep talking about swapping the antennas, and that does not equalize things based on their position in the fixture. You already know that certain positions in the fixture are advantaged or disadvantaged depending on where the quad is relative to the fixture.
2. You need to fly the quad in a fixed pattern relative to the fixture. Since you already know that the quad's position relative to the fixture is important, you need to minimize that variable as much as possible, otherwise it swamps out the data you are looking for. Even averaging the data across 4 runs with the receivers in each of the 4 different fixture locations doesn't average this out unless the quad flies the exact same pattern for all 4 runs.
3. You need to reduce the data down to a figure of merit so the modules can be compared directly. I am thinking of something like: 1) set a lower threshold, 2) square the noise value above the threshold, 3) integrate the squared noise values above the threshold, 4) reset the integral to zero when the noise drops below the threshold, 5) sum the results over time. The general idea is that 1) the pilot's loss of situational awareness is non-linear with noise - below a certain level it doesn't bother them much, and from there on up it gets worse in a non-linear manner - and 2) the longer the pilot's vision is compromised, the worse it gets, so integrating over time penalizes long periods of noise more than short intermittent ones. All of this is an attempt to translate objective data into the pilot's subjective experience, so it is very difficult to calibrate the algorithm, but there are many examples of signal weighting like this being used to analyze the human experience of audio, video, light levels, color sensitivity, etc.
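For what it's worth, a minimal sketch of the figure of merit described in point 3 could look like this, assuming a list of per-frame noise values; the threshold value and the exact penalty (squaring the excess above the threshold) are only one possible reading and are illustrative:

def figure_of_merit(noise_per_frame, threshold=10.0):
    """Sum of per-burst integrals of squared above-threshold noise."""
    total = 0.0   # sum of all completed burst integrals
    burst = 0.0   # running integral of the current above-threshold burst
    for n in noise_per_frame:
        if n > threshold:
            # Non-linear penalty: square the excess above the threshold.
            burst += (n - threshold) ** 2
        else:
            # Burst ended: bank its integral and reset.
            total += burst
            burst = 0.0
    return total + burst  # include a burst still open at the end of the clip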
1. The only case where this could make a very slight difference is the indoor track test (underground parking lot). The quads fly in front of the fixture, so the antennas are not obscured - with run #3 of Part 1 being the only exception, and just for a few seconds; I noticed it while editing and processing the data. The distance the quads covered is large enough for me to consider the few centimeters of positional difference between the modules negligible. If this were to be done, then the height should also be compared, as well as different distances between the modules, maybe different materials on the test rig, etc. It is a Pandora's box that does not have to be opened. The antennas were the biggest variable that needed to be ruled out - this is why everything remained the same and only the antennas were swapped around in Part 1.
2. It is impossible to do this. If we are talking about consistency, I should go to a train museum, find a large-scale railroad, strap a VTx and camera to the front of the train and do the test there. A fixed pattern cannot be achieved with a flying quad. It is also not needed, since I compare the performance of the modules separately for each test run. The averages you see in the graphs represent the average allocation of noise, not the average number of noisy frames. The quad's position relative to the fixture becomes less and less important as the distance increases. Also, the tests were done under actual flying conditions, not with quads hovering slowly around, like others have done. If position X on the track is advantageous for one module, it is safe to say that a position just a few millimeters from position X may be advantageous for another module. Going back to the train example, if that were the test scenario, the modules would of course be swapped around.
By the way, I had also ruled out any differences in the video feed that might be due to the hardware used in the test rig. I had repeatedly tested all the combinations and compared the footage from a static camera in stable lighting conditions, to make sure that none of the slots results in a different video feed in the DVR.
3. I agree, that is a good idea. The tests I ran in my 3 scenarios did not result in enough instances of prolonged noisy / unusable video feed. It still involves a subjective metric, though (the minimum number of consecutive frames of video breakup that is considered unacceptable). I intend to add a "Bando Freestyle" scenario in the next shootout and hope for an even more challenging situation - I have noted your suggestion and will try to implement it.
I assure you that I have put a lot of effort and hard work into this shootout. I don't know if it looks like it was an easy task but it was not - not at all.
I mentioned that the Pro58 was struggling, and I also mentioned that it was just for a few seconds (only in run #3 of Part 1). This was the only time the quad flew on the far right of the test rig, for a few seconds out of a 2.5-minute flight. If you think that this is a bias and that it kills the credibility of my tests as a whole, I cannot convince you otherwise.
I assume that by "camera" you mean the VRx module. Using 4 Pro58 modules to test the 4 sets of antennas introduces another variable: how can you be sure that all the Pro58 modules perform exactly the same? So there is no point in doing that, and instead I did the antenna swap test using the 4 different modules of the shootout. You can check the section of the Part 1 video that refers to the performance of each antenna set on each of the modules.
The quad flies in a large area and covers a very big distance compared to the distance that separates the modules on the test rig. It is also moving around all the time; it is not standing still in one position. As I already explained in the other comment: "If position X on the track is advantageous for one module, it is safe to say that a position just a few millimeters from position X may be advantageous for another module".
Normalizing the way you describe requires that the exact same flight be performed, which is impossible to do. If you are up to the task of "really performing a comparative experiment" the way you describe, I would be happy to see your results. Until then, I am sorry, but I cannot lightly accept that my test has lost its credibility for the reason you describe.
1. For the video shoot, just fly the drone(s) directly in front of the camera, hovering about 10 feet away and 5 feet high (the height of the cameras?). This would not be the realistic dynamic data that you have already processed, but it would be good enough to tell how much sensitivity is due to position at low dynamics of the flying drone.
2. Position camera #1 as follows:
1 2 3 4 (video shoot #1)
2 1 3 4 (video shoot #2)
4 2 1 3 (video shoot #3)
3 4 2 1 (video shoot #4)
4. Run your data analysis as you did before, but determine the video sensitivity of #1 as a function of position. Use statistics in your analysis and put in 1-sigma error bars for your results.
5. You can see that I designed a simple matrix in step #3 with the questionable camera on the diagonal. Then you can get the sensitivity of the other cameras, 2, 3 and 4, by positioning these on the diagonal. For example, for camera 2 it would be:
2 1 3 4
4 2 1 3
3 4 2 1
1 3 4 2
For further information concerning your experiment, please read the book "Design of Experiments" by William Diamond. It is on Amazon.com at:
https://www.amazon.com/s/ref=nb_sb_noss?url=search-alias%3Daps&field-keywords=design+of+experiments+by+william+diamond
Thanks for listening to my comments and again I apologize for being too harsh in my previous note.
I know your experiment and its analysis took a lot of time and patience. However, a little more scientific rigor in the undertaking and the analysis will make your experimental results better.
Regards, Bert
I need a few clarifications so that I make sure there is no misunderstanding:
- You refer to "cameras" but I understand that you mean the receiver modules I am comparing (I mention it in case others follow our discussion, to avoid any confusion).
- Concerning steps 1 and 2, do you mean that "video shoot #1" is the set of videos shot in step 1 with the modules laid out as 1 2 3 4 on the test rig, that "video shoot #2" is another set of videos shot with the modules as 2 1 3 4, "video shoot #3" another set with the modules as 4 2 1 3, and "video shoot #4" yet another set with the modules as 3 4 2 1 - meaning that 4 flights will be done, one for each arrangement of the modules? Is this correct?
- Also in step 5 you mentioned "step #3" but it is not described anywhere - please clarify if you mean step #2 instead.
Q#2: Yes, different videos shot with new Rx positions. 4 different flights are to be done, but remember this is with the drone hovering at a preset distance and height. This is an attempt to keep the video of the drones at a constant level, so that the saved video data would be somewhat, but not exactly, identical from shoot to shoot. This is only done to determine the effect of one of the Rx's being first in line, then second in line, then 3rd in line and finally last in line. Do this for all 4 Rx's and you'll have some data that determines the sensitivity of any of the Rx's to being 1st, 2nd, 3rd or last. If they're all the same, then the data should show it. If one of them is super-sensitive to position, then again the data would show it. Then you can clearly show that you've done some work in determining this.
Q#3: Step #3 is mislabeled, not missing - it is step #2. It's hard to write something in this scrolling input box and see it in one large area.
Also, one other comment. In your last graph, "Average allocation of noise (overall results of all test runs)", it shows that all the Rx's are very close to one another in the bar graph. If you use statistics for each of the bars, calculate the +/- 1-sigma error fluctuations in the data, and then replot the bar data with the 1-sigma errors as a line on top of each bar, you will probably get them all fairly close to one another. A theorem in statistics states that if the 1-sigma errors are overlapping, then you can't tell the difference between any of them. The conclusion then is that all the Rx's are alike and you can't tell the difference between any of them. You didn't provide a conclusion, so I can't tell what the 1-sigma errors are, only surmise. What is your conclusion?
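As a side note, the 1-sigma overlap check suggested here could be sketched roughly like this; the module names and per-run percentages are made up purely for illustration:

import statistics

def one_sigma_interval(values):
    """Return (mean - 1 sigma, mean + 1 sigma) for a list of per-run values."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return m - s, m + s

def intervals_overlap(a, b):
    return a[0] <= b[1] and b[0] <= a[1]

# Fictitious per-run noise shares (%) for two modules:
laforge = one_sigma_interval([22.1, 19.8, 24.3, 21.0])
true_d = one_sigma_interval([29.5, 31.2, 27.8, 30.1])
# Overlapping intervals would suggest the two modules are statistically indistinguishable.
print(intervals_overlap(laforge, true_d))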
I find it wrong to calculate the video sensitivity just by hovering at a specific height and distance, when the area in which the flight takes place is so big. Shouldn't this be repeated in various places within the "test site", at different distances and at different heights?
Also, describing the modules as being 1st, 2nd, etc. "in line" sounds like the test is performed with one module behind the other, which is not the case. The front of the test rig faces the area in which the flight is happening.
Just for an indication of "scale":
The wooden board of the test rig is approx. 65cm (0.65m) long. The area in which the quad travels, in Part 1 for example, is approx. 4500 square meters, in front of the test rig (without any of the antennas of one module obscuring the antenna of any of the other modules next to it).
I consider the few cm that separate each module / antenna set, in relation to the vast area in which the test quad is dynamically and constantly flying, completely negligible. But I would be happy to perform the test you have described, if you can suggest a way of performing 4 flights that are exactly identical to each other. And no, I do not consider an autonomous flight with a large drone an option for doing that.
The last graph you mention, "Average allocation of noise (overall results of all test runs)", is only for Part 3. There are 2 more graphs like this, one for each of the other parts (1 & 2).
If you check the graph of Part 1, which was the most demanding test with the most breakup, you will see that the performance of the VRx modules is far from "very close".
The performance of the modules is clearly shown in the DVR footage. The graphs have been manually checked to confirm that they reflect what is actually happening - and they do.
Each person can draw his/her own conclusion, based on the scenario he/she mostly flies. The overall conclusion for me is that the LaForge and the RX5808 are more consistent across all scenarios, with the LaForge performing consistently better of the two. The Pro58's strong suit is situations with significant breakup - in which it rules them all. The True-D falls behind and delivers the worst experience of these 4 modules.
(If you are on a PC you can grab the bottom right corner and increase the size of the text box you are writing the reply into)