Tuning the field of view
A can of worms, as I found out… :-)
Context
I am into sim-racing, and I document the stuff I do in these boring write-ups, mostly as a way to share what I am learning with others, and also to keep a record for when I have to re-do things, since one of the defining aspects of this hobby is that you are constantly updating, re-building, and re-tuning.
Some time back, I finished version 2 of my simulation rig, which was a large undertaking on my part. It required building a new monitor stand, changing motion platforms, and much more. I posted the results and I received a ton of comments about the inadequacy of my FOV. I was a bit surprised by this, so I went and re-checked, and re-adjusted it.
This write-up documents where I’ve landed, and how I got there.
An innocent post
Picture this: you’ve spent a lot of money and months’ worth of work, lifted heavy stuff up to the second floor of your home, struggled with Windows and new electronic components for days on end, and questioned, multiple times, why on earth you decided to embark on this to begin with. Finally, you have it sort of working. As a matter of fact, you’ve literally *just* finished making sure the thing moves, the game runs, and the sound sounds. Full of excitement and pride, and taking advantage of the fact that your daughter is visiting from college, you ask her to take a video. Later that day, you post the video to the group. Meanwhile, you continue tweaking and tuning, of course, as that never ends.
People are overwhelmingly positive and supportive. That’s nice, of course, but what makes me happiest is that I don’t seem to have mis-designed this one. Cool!, you think.
Very early on I see this note.
I found it interesting because, up until that point, people had been positive, so I assumed it was an overly cautious note, so much so that I responded with this:
Little did I know how wrong I was…
To make a long story short, the majority of the comments were about the view, the FOV, the POV, and how incorrect it all was. Some choice findings below:
I loved these comments and candidly laughed out loud at many of them. The following, though, takes the cake:
nice station but monitors are too big…… looks like we’re in lilliput with Gulliver
All in all, there were more than 300 comments, and at least a third were about the FOV/POV.
A clarification: while of course there were a few comments that were perhaps not in good taste, by and large everybody was supportive, and when I engaged they tried to explain their point of view, etc. So I am not complaining about the comments. On the contrary: first they made me laugh a lot (see Lilliput and Gulliver above for a taste) and second, they gave me the motivation to look into this a bit further.
Si el río truena… (roughly, the Spanish saying for “where there’s smoke, there’s fire”)
Since I was in the process of tweaking anyway, and given the mountain of comments on the field of view, I decided to move some things around to see where I landed.
From the get-go, I acknowledged (and did so in the video) that the initial setup was rough, and, as explained above, the video was taken before I was finished. This means that there was some tuning to be done anyway, and it was all in line with the many comments I mentioned above.
Still, when I was done, there was a bit of a difference between where I landed and where the commenters wanted me to land. Since it looks OK to me, I decided to write this up: perhaps I am missing something, or perhaps we are optimizing for different things.
With that in mind, let’s press on…
Yes, I know what cool looks like…
Right off the bat, it’s important to acknowledge that I know what “cool” looks like. There are pros at this, with 65in screens, who make it look hyper-realistic. First among them, in my opinion, is BoostedMedia. If you look at their YouTube channel you can see tons of examples of him driving his 65in rig, where it looks amazingly realistic. A snippet from one of his videos is below, just for your viewing pleasure.
You can see how the non-monitor parts (dark) blend nicely with the monitor part. It’s a work of art. Also, notice how he has a mixture of virtual dash and real devices. Amazing!
Here is a driver’s-eye view of his setup in ACC, I think. Again, amazing.
Now, Will is a professional, and he uses the best cameras etc. Plus, he is a lot taller than me (keep that in mind).
Still, I can probably get close to Will (sans pro equipment for sure), as you can see below.
This is not exactly an eye’s view, as I don’t have my hands on the wheel, my physical dash obstructs the virtual one, and the GoPro has a wider lens than the snippet of Will’s, but in general, you can see how, by changing a few knobs (on the button box), I can get it close.
The problem is… do I really want that view? As of this writing, my answer is “no”, and I fully understand (now) that such an obvious departure from FOV canon will elicit the ire of the gods. Thus, in order to keep me safe, let’s take it slow…
What am I trying to achieve, really?
I think it best if we level-set on what my goals are for my setup. They may differ from yours (or Will’s). In a nutshell, I want to make the simulated experience as close as possible to what I am used to as a driver.
The reason for this is obvious. Since I am a bad sim driver, I want to leverage as much experience as possible. Thus, getting the simulator to mimic the experiences I have in real life, in my guesstimation, would help. Am I certain about this? Heck no! But then again, it’s only a hobby!
Now, what are the experiences I have in real life, you ask? Sad to report they do not include Porsches, or Audis, or Ferraris. Much less F1s. I drive regular, boring cars.
So that is what I am trying to mimic. My brain has years of timing corners, guessing openings, and calculating widths. Getting the driver’s eye view from the sim as close to what the driver’s eye view in real everyday life driving is what I want to optimize for.
Now, when you do that, it is possible (not guaranteed, but possible) that you will forgo some beauty in the overall visuals. The game renders beautiful things that might not “make it onto the screen” when they otherwise would.
This is in stark contrast to the approach that professionals take. As an example, they sometimes modify their physical setup so it does not obstruct the digital version (which looks better on camera). If you watch this video, for instance, you will find a number of times where Will has removed or modified parts of his rig so it looks better on camera. When I say “camera”, I do not mean just the driver’s-eye camera, but also the multiple side cameras they have to show the hyper-realistic shots of themselves driving.
Keep in mind, I don’t care about that… all I care about is it looking similar to what I am used to, even if it’s not as beautiful, or doesn’t have as many virtual gadgets visible on the screen.
The physical position of everything, or starting with the right foot…
So what is important when striving to get as great a FOV/POV as you can? In my opinion, it starts with the position of everything. This involves the size of the monitors, their position, the angle of the side monitors, their height, and the position of the rig so that your eye is in the right spot.
Fortunately, there’s math for all of this. I went over some of this math here, but I will summarize below for completeness’ sake.
- You need to be as close to the monitors as possible (within reason, more on this later)
- When seated in “center” mode of the motion rig, your eye needs to be roughly in the center of the screen, vertically speaking
- For the angle and eye distance there are three variables: the width of your screens, the distance from the eye to the center screen, and the angle of the side monitors. Normally you have the width, you fix one of the other two, and you calculate the remaining one, as the following picture shows.
In my case, given the size of the monitors and the room dimensions, I had to fix the angle, which meant I needed to compute the distance to the center screen.
My screens are roughly 1445mm wide, and they are mounted at a 70deg angle. Using the formula above (or one of the several web FOV calculators that you can Google) you’ll come out with a distance to the eye of 1031.8mm. Let’s round up to 1032mm :-)
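The geometry is simple enough to check yourself. A minimal sketch (the function name is mine) of the relationship I believe the picture above describes, where the side-monitor angle should equal the horizontal angle one screen subtends from the eye; it reproduces the 1031.8mm figure from my screen width and 70deg angle:

```python
import math

def eye_distance_mm(screen_width_mm: float, side_angle_deg: float) -> float:
    """Distance from eye to center screen such that a screen of the
    given width subtends side_angle_deg horizontally, i.e. the angle
    at which the side monitors should be mounted to wrap seamlessly."""
    half_angle = math.radians(side_angle_deg / 2)
    return (screen_width_mm / 2) / math.tan(half_angle)

# My numbers: 1445mm-wide screens at a 70deg side angle
print(round(eye_distance_mm(1445, 70), 1))  # → 1031.8
```

If you fix the distance instead, you can invert the same formula to get the angle: `2 * atan((width / 2) / distance)`.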
So, in order for everything to work, I need to get my eye, when seated, about 1 meter away from the center monitor, right?
Taking this stuff seriously…
Just to lend some credence to the fact that I took the above calculation seriously, consider that I already had my monitors happily mounted (on an NLR triple stand). However, that stand interfered with the (new) motion rig, which meant that the closest I could get my eye was about 1200mm.
170mm; 17cm; 6.69 inches.
That’s the gap between where I was, and where I needed to be.
Only 6.7 inches.
And yet, that set me on a course to do away with the triple stand and build one of my own, mounted to the wall, so I could position the eye in the right place. You can read about this, in case you want to build your own, here.
So, yes, visual fidelity is important to me, as you can see :)
Getting back to the in-game setup of the FOV
I had been tweaking things and, in the end, I was not really sure what was best. So I decided to design a “test” to find out, somewhat unbiasedly, how close the simulated view was to my personal real-life driver’s-eye view.
I started by trying to find a camera that has, ***roughly speaking***, a similar “lens” to the eye, in terms of angle of view. I ended up with a GoPro, which, in SuperView mode, is close enough to what the eye sees.
Now, and this is key: while the camera “covers” roughly what my eye covers when both are positioned next to each other, the camera is far more perfect in the quality of the peripheral images. For me, and maybe it is just me, peripheral vision is a bit blurry, not razor sharp and crisp as the GoPro images are. Keep that in mind.
Then, I decided on positioning the GoPro flat against my forehead, roughly in the middle. Sure, it’s a bit higher than my eye, but not by much (maybe a couple of centimeters), and it’s reasonably easy to “anchor” it to a known point (my forehead) so tests are repeatable.
Finally, I decided on a simple sequence: look forward, pan left, pan right, end. I went out to the car (the SUV was outside; my wife had taken the Ferrari to work, sorry :) :)) and took a video. Then I went to the simrig, tweaked settings, and took another video.
I concatenated both recordings but, other than that, the link below has the raw video.
Then, I did some comparisons, by matching the position of the head in each video. The following picture shows what I hope you will agree are very similar perspectives from the driver's eye.
In the picture above, all I have done is crop them a bit; other than that, I have not resized or changed the pictures in any way. The idea is that whatever the characteristics of the “eye” that filmed the real car, they need to be preserved when filming the simulated car. If I were stretching or resizing things, it would not be fair. So, what you see above is literally the raw footage with parts taken out (top/bottom) so that I could make the comparisons align, is all.
To recap, the GoPro is in the same place, moving the same way between the two recordings. I am seated normally in each. Unfortunately, the wheel in the real-life recording is “turning left” whereas it is not in the sim rig recording, so you will have to correct that in your mind.
Notice how similar the views are. In particular, if you watch the video and you see me move my arm, you will have a sense of perspective and will see how similar they both are. Even the sizes of the rearview mirrors are similar.
And yet, when you look closely, the way I have set up AMS2 to provide this similar viewpoint is *not* the way the pros do it above. Notice how I am “closer” to the windshield. Notice how that means I cannot fully see all the gadgets in the (virtual) dashboard. Also, notice how big the little bit of the wheel you can see is. Of course, I turn the wheel/hands off, as I have my real/physical wheel and my own hands to watch and don’t need a duplicate. But, in the spirit of transparency, if I *were* to show the wheel, it would look BIGGER than the real one. This is expected: as I’ve gotten closer to the windshield, the closer elements (e.g. the wheel) grow substantially in size. Also, the fact that the TVs are big plays a part here, I think.
Still, it is a moot point (for me), because in the end, I don’t show the wheel or hands (I have my own) and I don’t use the virtual dashboard either, as I have a physical one. Thus, the relatively few “drawbacks” of being closer are negated by not showing these items…and only the good stuff of the setup remains…
Speaking of which, the physical wheel is where it’s supposed to be, and behind it, the top of the dashboard starts, just as it does in real life (again, follow my hand as I pan left/right in both videos). I see roughly the same amount of everything in the Kia as I do in the Porsche, and the sizes of the things that matter (to me), the windshield and the mirrors, are roughly proportional to real life (comparing against my arm/hand).
Where’s the catch?
This brings me to the conclusion, and an ask. Where’s the catch? I have not been able to find it. I have tried the “pros’” approach to this, and, in the end, the video above is the closest I have come to replicating the look of real life, which is what I set out to optimize for.
Hope this is informative and if you have any suggestions feel free to email me. Thanks.