The best I can offer is to keep trying to get to demo both at some length. :7
Demoing things like Valve's "The Lab" and the "Oculus Home" environment may not be entirely representative, because their respective art directions are both balanced around certain overall scene lighting levels that happen to be optimal for minimising apparent artefacts caused by the Fresnel lenses; a pleasant, slightly-above-medium grey-ish palette - low on saturation and contrast; think watercolour, but a tiny bit darker. :7
Go, instead, into a high-contrast environment; a dark place dotted with bright accent lights, such as many cockpits in Elite Dangerous whilst out in dark space, and the "god rays" may jump out at you and smear themselves intrusively all over your vision, millimetres from your eyes.
Overall, the Rift (Consumer Version 1) chooses more pixels per degree of your field of view (EDIT3: ...for a significantly sharper image), at the natural cost of a reduction in the width of the latter. This is mitigated a bit by a secondary sacrifice in binocular overlap; you'll still see almost as far to the sides as in a Vive, but the last 5-10 degrees out to either side can only be seen by the eye on that side, as if you had the largest nose in the world. By all accounts, most people never seem to even notice this, but to me, personally, it is very annoying.
The Rift is also a more convenient experience; it is lighter, and has adjustable headphones mounted on its rigid frame, which makes putting it on much the same as putting on a baseball cap, without any fiddling with straps, cables, and separate headphones whatsoever, and the Home environment starts right up, automatically, just from donning it.
Its lenses arguably offer a clearer and more consistent image, where text out to the sides is still pretty damn legible, whereas with the Vive, sharpness falls off, and you can get a bit of "double vision" between the outermost Fresnel lens rings. A few of us, however, experience a bit of distortion in the Rift when looking around, which can cause some "brain strain".
The Rift will get its own motion controllers, like the Vive's wands, sometime in the autumn. They are designed more to be "part of your hand", so to speak, taking their shape from a hand in resting position, so that you can almost forget you are holding anything at all. They also have analogue buttons with capacitive touch sensing for groups of fingers, which allows them to infer different hand gestures; e.g. squeeze the buttons under all four fingers, and take your thumb off all of the spots where it may rest, and that is interpreted as a thumbs up, or down, depending on where you are pointing the device.
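That gesture inference can be sketched roughly like so - a toy illustration only; the sensor names and the exact rules are my assumptions, not the actual Oculus SDK logic:

```python
# Toy sketch of gesture inference from capacitive touch/press states.
# Inputs and rules are illustrative assumptions, not real Touch SDK behaviour.

def infer_gesture(thumb_on_any_rest: bool,
                  index_on_trigger: bool,
                  grip_squeezed: bool) -> str:
    """Infer a hand pose from per-finger-group touch and press states."""
    if not index_on_trigger and grip_squeezed:
        # Index extended off the trigger, other fingers curled: pointing.
        return "point"
    if not thumb_on_any_rest and grip_squeezed and index_on_trigger:
        # All four fingers squeezing, thumb lifted off every rest spot:
        # thumbs up or down, depending on controller orientation
        # (orientation not modelled here).
        return "thumbs_up_or_down"
    if grip_squeezed and index_on_trigger and thumb_on_any_rest:
        return "fist"
    return "open_hand"
```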
The Vive is brighter, but possibly may not conform to the sRGB standard - this appears to result in significantly higher overall contrast ratios in most stuff, compared to the Rift. On one hand, this kind of seems to add a bit of a "realistic" feel to the imagery - on the other, it causes a lot of loss in visual detail, with things in shadow crushing toward black, and bright things blowing out. If we had a good HDR and wide colour gamut mastering standard, maybe an adaptation of BT.2020, together with colour profiling for each HMD, things could be reined in, and those extra lumens exploited... (EDIT: Such a standard cannot come soon enough, as far as I am concerned; I want today's content to look as right as possible in tomorrow's hardware, regardless of manufacturer, or API, or capability.) (EDIT2: ...and I also have some strong disagreeable opinions on the act of tying drivers and hardware/metaverse framework APIs to vendor frontends. )
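To illustrate why a display that does not follow the sRGB curve crushes shadows: here is the standard sRGB decoding function next to a hypothetical display that decodes with a plain power curve instead. (The plain-gamma display is just a stand-in to show the effect; I am not claiming this is what the Vive's panel actually does.)

```python
def srgb_eotf(v: float) -> float:
    """Standard sRGB electro-optical transfer function (encoded -> linear)."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

def pure_gamma_eotf(v: float, gamma: float = 2.4) -> float:
    """Hypothetical display decoding with a plain power curve instead."""
    return v ** gamma

# A dark sRGB-encoded value comes out noticeably darker on the plain
# power-curve display than sRGB intends - shadow detail gets crushed.
encoded = 0.1
print(srgb_eotf(encoded))        # ~0.010
print(pure_gamma_eotf(encoded))  # ~0.004
```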
Despite its numerous shortcomings (...and, by most accounts atypically, I experience more Fresnel glare than in the Rift), so far I find myself constantly turning to my Vive first - for standing and sitting experiences both, but that's me; people have wildly differing experiences with the devices; how well they fit the shape of each individual's face, alone, and how this affects not only comfort but, more importantly, the optical path, tends to be rather important. :7
Why is this the first guy to think of this? It's basic human anatomy...
Not the first to think of it, I am pretty sure, but certainly one of the few to try it, in the face of rather condescending self-proclaimed know-it-all detractors. :7
The method has a few shortcomings; most notably, it cannot overcome the lack of vestibular feedback from velocity changes, but it does give you a helpful amount of sensory input from the physical treading, and is the method I favour at this time (EDIT4: ...as a complement to room-scale free movement, mind you - not a replacement); it just needs tracking of the user's feet, and perfect walk-cycle phase matching. :7
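The core of such walking-in-place locomotion can be sketched in a few lines: tracked foot heights drive virtual forward speed, so the physical treading supplies sensory input even though you stay on the spot. The tracker interface, thresholds, and speed mapping below are all my own illustrative assumptions, not any real SDK:

```python
# Toy sketch of walking-in-place locomotion from tracked foot heights.
# Thresholds and the cadence-to-speed mapping are illustrative guesses.

STEP_THRESHOLD = 0.05      # metres a foot must lift to count as a step
SPEED_PER_STEP = 0.7       # metres/second of virtual travel per step in window

class TreadLocomotion:
    def __init__(self):
        self.prev_up = [False, False]  # was each foot lifted last frame?
        self.step_times = []           # recent step timestamps (seconds)

    def update(self, t: float, foot_heights: tuple) -> float:
        """Feed foot heights each frame; return current virtual speed (m/s)."""
        for i, h in enumerate(foot_heights):
            up = h > STEP_THRESHOLD
            if up and not self.prev_up[i]:  # rising edge = a new step
                self.step_times.append(t)
            self.prev_up[i] = up
        # Keep only the last second of steps to estimate cadence.
        self.step_times = [s for s in self.step_times if t - s <= 1.0]
        return len(self.step_times) * SPEED_PER_STEP
```

A fuller version would also match the *phase* of the virtual gait to the physical one (e.g. which foot is planted, and where in the stride it is), which is the "perfect walk cycle phase matching" part; cadence alone, as above, is the easy half.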