I guess I will pose it with this analogy. Let's say you have a logo you want to use in print, but the file supplied to you was only 150x150 pixels and you need to put it on a banner that's 1500x1500. You can rescale it to 1500x1500 and get a file that is a hundred times bigger (in pixel count and on disk), but you can't add data that wasn't there in the first place, so it will obviously look pixelated...
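That intuition is easy to demonstrate. Here's a minimal Python sketch with made-up pixel values (nothing camera-specific): upscaling by pixel repetition makes the image nine times larger without adding a single new value.

```python
# A tiny 2x2 "logo", upscaled 3x by repeating pixels (nearest neighbour).
logo = [[10, 20],
        [30, 40]]

def upscale(img, factor):
    """Repeat each pixel factor x factor times: more pixels, same information."""
    out = []
    for row in img:
        big_row = [px for px in row for _ in range(factor)]
        out.extend([list(big_row) for _ in range(factor)])
    return out

big = upscale(logo, 3)
print(len(big), len(big[0]))                    # 6 6 -- nine times the pixels
print(len({px for row in big for px in row}))   # still only 4 distinct values
```

Nine times the data, same four numbers: the file got bigger, the picture didn't get richer.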
So what is it about the transcoding process that makes the footage better to grade? I agree ProRes plays nicer with most NLEs than the dumb H.264s from the camera, but adding 415 Mbps of bitrate in the hope of making the shot better or more flexible to grade is, logically speaking, akin to blowing up a low-res logo and expecting it to be sharp.
Sorry, I don't mean to be a ****; I just think it's a dumb marketing ploy by DJI...
Yes and no. You can't increase quality through transcoding. "Quality" is subjective, so let's call it "information" for the sake of conversation. But log profiles are not marketing ploys. Every pro camera line has its own log: Alexa has Log C, RED has Log3G10, Sony has S-Log, Canon has C-Log, and some have several.
Essentially there are three types of codecs: capture codecs, edit codecs and distribution codecs, and they serve different purposes. A capture codec needs to write as much information as possible in a very small amount of time. The camera chip sees changes of light as a straight line (the video look); the eye, however, responds along an S-curve as light changes (the filmic look). Applying that curve (plus the subjective choices baked into it) is processor- and time-intensive. So instead of processing the information during capture to create the curve, the files are recorded with a set of instructions, D-log in DJI's case, that will later be applied to the "straight line" information coming off the sensor.
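A toy example of why that straight line is a problem for grading (the curve and numbers below are invented for illustration; the real D-log transfer function is DJI's own math): with a straight-line 8-bit encode, two shadow tones a full stop apart can collapse into the same code value, while a log-style encode keeps them distinct.

```python
import math

CODES = 255   # 8-bit output
STOPS = 10    # assumed sensor dynamic range (hypothetical)

def linear_encode(x):
    """Straight-line transfer: code value proportional to scene light (0..1)."""
    return round(x * CODES)

def log_encode(x):
    """Toy log transfer that spreads code values evenly across stops.
    Illustrative only -- not any real camera log formula."""
    v = (math.log2(max(x, 2.0 ** -STOPS)) + STOPS) / STOPS
    return round(v * CODES)

darker, dark = 0.002, 0.004            # two shadow tones one stop apart
print(linear_encode(darker), linear_encode(dark))   # 1 1   -- crushed together
print(log_encode(darker), log_encode(dark))         # 26 52 -- still distinct
```

Once two tones land on the same code value, no amount of grading can pull them apart again, which is exactly the logo-upscaling problem from the earlier post.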
Edit codecs are designed to do just that, what Steve said: create a more suitable and larger container for the video file. No matter how well you shoot, the footage out of the camera will need to be converted to an edit-friendly codec that isn't constantly reading instruction files during playback. Apple ProRes, in this example, has the log curve applied to every single pixel of every single frame, making the file much larger than the original capture file while giving the video file enough "room" to scale, grade, apply effects or simply convert.
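Conceptually, "applying the curve to every pixel" is just a lookup-table pass. A hedged sketch, using a simple gamma lift as a stand-in for a real log-to-working-space LUT (the values are toys, not any actual curve):

```python
# Hypothetical 1D LUT: a gamma lift standing in for a real camera curve.
curve = [round((v / 255) ** 0.45 * 255) for v in range(256)]

frame = [[12, 120],
         [200, 255]]          # toy 2x2 frame of 8-bit code values

# Transcoding "bakes in" the look: every pixel passes through the LUT once,
# so downstream tools never have to re-read the camera's instruction file.
baked = [[curve[px] for px in row] for row in frame]
print(baked)
```

That per-pixel pass is why the transcode is slower to create but faster to edit: the work is done once, up front, instead of on every playback.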
Especially in aerial cinematography, you'll be dealing with a very wide dynamic range between sky and ground. Every time you change the direction of the drone, that shot will likely not match the previous one. Grading is essential if matching shots is important (in other words, if you're getting paid to create professional footage, they'd better match). Even if you shoot perfectly, sooner or later that original camera file will need to be down-converted to a playback file, risking the possibility of skewing and compressing your perfect colors.
H.264 isn't crappy; it's an excellent playback codec, as good as any written so far. It's only crappy when a camera outputs its capture information through H.264. The camera is asked to capture 24 fps at 4K resolution, compress the instructions into a ready-to-read file, AND THEN write all that to the card really fast. To make this possible, the quality (how much information can be captured) takes a hit, making H.264 a "crappy" capture codec. Why do some camera manufacturers still write to H.264? Because consumers want to take the camera out of the box and start shooting to their heart's delight. Cinematographers have the responsibility of strategizing, both aesthetically and technically: translating the story from script, to real-world light, to camera information, to a suitable distribution format. An amateur doesn't have these burdens, and therefore a log profile is a completely useless thing for your average consumer.