<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	xmlns:media="http://search.yahoo.com/mrss/"
>

<channel>
	<title>Ideas &#8211; Wade Tregaskis</title>
	<atom:link href="https://wadetregaskis.com/categories/ideas/feed/" rel="self" type="application/rss+xml" />
	<link>https://wadetregaskis.com</link>
	<description></description>
	<lastBuildDate>Tue, 02 Jan 2024 04:16:52 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	

<image>
	<url>https://wadetregaskis.com/wp-content/uploads/2016/03/Stitch-512x512-1-256x256.png</url>
	<title>Ideas &#8211; Wade Tregaskis</title>
	<link>https://wadetregaskis.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<site xmlns="com-wordpress:feed-additions:1">226351702</site>	<item>
		<title>Z9 II wishlist</title>
		<link>https://wadetregaskis.com/z9-ii-wishlist/</link>
					<comments>https://wadetregaskis.com/z9-ii-wishlist/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Sat, 18 Nov 2023 01:33:36 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<category><![CDATA[Photography]]></category>
		<category><![CDATA[autofocus]]></category>
		<category><![CDATA[Nikon]]></category>
		<category><![CDATA[wishlist]]></category>
		<guid isPermaLink="false">https://blog.wadetregaskis.com/?p=5022</guid>

					<description><![CDATA[Note: I originally wrote this in early 2022, after a few months with the Z9, but I forgot to actually publish it! I realised this in November 2023, so I corrected that oversight after a quick update (e.g. I originally had a wishlist item for a &#8220;portrait-grip-less Z9 without any other changes&#8221;, which is basically&#8230; <a class="read-more-link" href="https://wadetregaskis.com/z9-ii-wishlist/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[
<div class="wp-block-group"><div class="wp-block-group__inner-container is-layout-constrained wp-block-group-is-layout-constrained">
<p>Note:  I originally wrote this in early 2022, after a few months with the Z9, but I forgot to actually publish it!  I realised this in November 2023, so I corrected that oversight after a quick update (e.g. I originally had a wishlist item for a &#8220;portrait-grip-less Z9 without any other changes&#8221;, which is basically the Z8 we did in fact get!).</p>
</div></div>



<p>What follows is a list of things I wish the Z9 had or could do better.  I believe these are actually viable &#8211; I&#8217;m avoiding the common but perhaps unrealistic items, like massively improved dynamic range or noise performance.</p>



<h2 class="wp-block-heading" id="autofocus">Autofocus</h2>



<h3 class="wp-block-heading" id="better-low-light-autofocus">Better low-light autofocus</h3>



<p>The Z9&#8217;s not <em>bad</em>, but it could be better &#8211; all cameras could &#8211; and in particular I&#8217;d love to see some of the caveats eliminated (like having to compromise between accurate exposure previews and autofocus performance).</p>



<h3 class="wp-block-heading">Better red light autofocus</h3>



<p>Purportedly (per chatter on the interwebs) mirrorless cameras typically only use green and/or blue sensels for autofocus, not red.  I&#8217;m not sure how accurate that is &#8211; it&#8217;s a strange choice on the face of it, and at least partly false since you <em>can</em> focus on a purely red object &#8211; but it <em>does</em> partially track with the actual behaviour of the Z9 (and the Z7 before it), which is to really struggle to autofocus under predominantly red light or with purely red subjects.</p>



<p>This is particularly a problem underwater, and of course in conjunction with many low-light focus aids such as on some Speedlights and strobes.</p>



<p>For my typical subjects and subject matter it&#8217;s not a big deal, although in a way that just makes it even more prominent when I am in that situation.</p>



<h3 class="wp-block-heading" id="better-subject-recognition">Better subject recognition</h3>



<p>This is a broad area, but any improvement in any direction would be good.  Things like:</p>



<ul class="wp-block-list">
<li>Recognition of a wider range of subjects (particularly wildlife).</li>



<li>More reliable detection of eyes (as opposed to e.g. ears &amp; nostrils).</li>



<li>&#8220;Iris&#8221; detection or whatever you want to call it &#8211; the ability to focus specifically on the iris rather than e.g. eyelashes.</li>



<li>Better recognition of subjects&#8217; heads when they&#8217;re <em>not</em> closer to the camera than any other part of the subject.<br><br>All too often the animal is in a profile view, or even facing away from me but with their head / face / eyes still in view, and the Z9 <em>very</em> often loses the face and reverts to &#8220;centre of mass&#8221;, which is usually the animal&#8217;s side, or butt.  The Z9 really needs to fixate on the head / face / eyes if those are anywhere in view, irrespective of their position relative to the body.</li>
</ul>



<h3 class="wp-block-heading" id="evf-eye-tracking">EVF eye tracking</h3>



<p>I haven&#8217;t used the Canon R3 &#8211; and the reviews of its eye tracking are mixed, indicating it&#8217;s not quite there yet technically &#8211; but Canon is clearly going in the right direction with its EVF eye tracking.  It&#8217;s the superior way of selecting your subject and placing your focus point.</p>



<h3 class="wp-block-heading">Less fixation on detected subjects in 3D Tracking</h3>



<p>If there&#8217;s a subject detected <em>anywhere</em> in the frame, the Z9 will <em>always</em> focus on it in 3D tracking mode, no matter where the focus point is.  This is incredibly frustrating and hostile behaviour, especially while subject detection has so many false positives and doesn&#8217;t reliably prioritise the right part of the subject (e.g. ignoring the head in favour of the butt).</p>



<p>Instead, it should lock onto the detected subject <em>only</em> if I actually put the focus point over the subject detection box and then engage it.  Otherwise, it should ignore detected subjects and focus on what I told it to.</p>



<p>It&#8217;s permissible if there&#8217;s leeway here, to allow for imperfect positioning of the focus point vs the subject, such as with rapidly-moving subjects.  This could be something that&#8217;s configurable, to suit people&#8217;s differing tastes and needs for how &#8220;generous&#8221; the camera should be regarding precise placement.</p>



<h3 class="wp-block-heading">No trade-off with correct exposure preview</h3>



<p>All Nikon Z cameras to date &#8211; perhaps all mirrorless cameras? &#8211; force an unfortunate trade-off between autofocus performance and accurate exposure previews.  I believe this is largely a false dichotomy.</p>



<p>The autofocus sensels are on the image sensor (as opposed to a completely separate sensor, as in most DSLRs) and their gain setting (ISO) seems to be tied to a sensor-wide value.  Their performance relies on having a strong signal (i.e. enough light), so it&#8217;s important that the gain be as high as possible (without clipping).  But that might not be what you want for the final exposure &#8211; perhaps you&#8217;re trying to preserve brighter tones elsewhere in the frame, for example.  In that case your autofocus system might not be getting as much light as it&#8217;d like, and it performs poorly as a consequence.</p>



<p>The Z9 lets you either see an accurate exposure preview &#8211; at the expense of poorer AF performance if your subject isn&#8217;t very bright &#8211; or an inaccurate one with better AF performance (similar to the optical viewfinder experience).</p>



<p>I believe it could do the best of both at only minor inconvenience to dynamic range accuracy &#8211; it can adjust the sensor&#8217;s ISO to suit the autofocus system, then digitally scale the exposure in the EVF to represent your exposure settings.  This does potentially mean crushing the blacks or blowing the highlights in the EVF&#8217;s preview (no such issues with the actual photos) but that&#8217;s a minor inconvenience in comparison to the alternatives.</p>



<p>Making the &#8216;strength&#8217; of this tuneable could also help suit every individual&#8217;s preferences (e.g. allow up to N stops of such internal adjustment).</p>
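<p>As a rough illustration of the proposal above &#8211; run the sensor at the gain the autofocus system prefers, then digitally rescale the EVF preview to match the user's chosen exposure &#8211; here's a minimal sketch.  All names and limits are hypothetical, not anything Nikon actually implements:</p>

```python
# Hypothetical sketch: the sensor runs at the gain (ISO) the AF system
# wants, and the EVF preview is digitally rescaled to show the user's
# chosen exposure instead.  Exposures are expressed in stops (EV).

def evf_scale_factor(user_ev: float, af_ev: float, max_stops: float = 3.0) -> float:
    """Multiplier for preview pixel values so the EVF reflects the user's
    exposure while the sensor runs at the AF-preferred gain.

    Each stop of difference doubles or halves brightness; the adjustment
    is clamped to +/- max_stops -- the user-tuneable 'strength'.
    """
    stops = max(-max_stops, min(max_stops, user_ev - af_ev))
    return 2.0 ** stops

# If the AF system wants 2 stops more gain than the user's exposure,
# the preview is darkened to a quarter brightness to compensate.
```

<p>The clamp corresponds directly to the &#8220;up to N stops of internal adjustment&#8221; tuning suggested above.</p>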



<p>Note that it could also in theory adjust the autofocus sensels independently to the imaging sensels used for the EVF / LCD image, and that would of course be the optimal solution.  I&#8217;m just not sure how viable that is for technical reasons.  I also suspect that as autofocus systems continue to evolve into scene- and subject-analysis systems, they&#8217;ll need essentially the entire image anyway.</p>



<h3 class="wp-block-heading">Same autofocus in video mode as stills</h3>



<p>This applies broadly &#8211; right now in video mode you have more limited options (e.g. no 3D tracking, only the less reliable &#8220;subject tracking&#8221;), you can&#8217;t use custom buttons <em>at all</em> for customised autofocus engagement, and you also have a <em>way</em> less performant autofocus system in general.</p>



<p>It&#8217;s baffling that there are these differences.  The limitations on button configuration are just arbitrary.  And I don&#8217;t know what camera resources they&#8217;re overloading between autofocus function &amp; video recording, that preclude them both being used simultaneously, but they should stop it.  Add more dedicated hardware.  Do whatever it takes to make autofocus work identically whether you&#8217;re doing stills or video.</p>



<p>It&#8217;s clear Nikon pushed harder than ever to make the Z9 a good video camera, so it&#8217;s puzzling that they didn&#8217;t address these flaws along with the boost to recording resolutions, bitrates, and formats.</p>



<p>To elaborate, autofocus in video mode on the Z9 is disappointing.  It doesn&#8217;t work correctly a lot of the time &#8211; outright refusing to focus, or focusing stubbornly on the background no matter what you or your subject do, or just simply missing acceptable focus.  Switch to stills mode and autofocus often works perfectly, in comparison.  In fact it&#8217;s such a dramatic disparity that I sometimes switch to stills mode temporarily just to autofocus.  Yes, it&#8217;s very frustrating and I miss critical moments, but the alternative is all-too-often that I can&#8217;t get anything in focus at all.</p>



<p>Manual focus should of course not be the &#8216;workaround&#8217;, but even aside from the principle of it, it&#8217;s just not possible to <em>accurately</em> focus manually while recording 8k video (~33 megapixels) through a 1.2-megapixel viewfinder.  Even in 4k (~8 megapixels) it&#8217;s very challenging &#8211; to say nothing of whether you&#8217;re skilled enough to track a moving subject anyway.</p>



<h2 class="wp-block-heading">Camera modes</h2>



<h3 class="wp-block-heading">Motion-aware aperture priority</h3>



<p>The camera should be able to set the shutter speed automatically based on actual subject &amp; camera movement.  e.g. if I&#8217;m photographing a bird that&#8217;s perched, essentially immobile, in limited light, the camera should automatically drop the shutter speed in order to lower the ISO and thus minimise noise.  If the bird suddenly starts moving, it should instantly raise the shutter speed to whatever is necessary to freeze the bird&#8217;s motion.</p>



<p>In all of this it should understand what shutter speeds are viable given the degree of perceived movement involved &#8211; factoring in focal length and recent image stabilisation performance &#8211; and including the recent history of camera movement so that it adapts to different users and situations (e.g. buffeting winds, being on a moving platform, etc).</p>



<p>Some cameras &#8211; like GoPros &#8211; already do a limited variant of this whereby they end an exposure early when they detect significant camera movement.  Especially in video mode where you can benefit from inter-frame noise reduction, this is what helps make GoPro footage look exceptionally-well stabilised while remaining surprisingly consistent in exposure and noise levels.</p>



<p>The degree of &#8216;freezing&#8217; could be configurable along two dimensions:</p>



<ul class="wp-block-list">
<li>Strength.  Different folks have different tolerances for blur, so being able to trade-off between pixel-perfect sharpness and noise is important.</li>



<li>Subject-only vs whole scene.  Maybe you want to freeze your subject but don&#8217;t care about the background, such as when panning with a bird in flight or a moving vehicle.  I expect this&#8217;d be what most people want most of the time.  But sometimes you might really want to freeze the entire scene, even if you&#8217;re panning.<br><br>This is analogous to exposure compensation settings for use with flash.</li>
</ul>
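<p>The core of such a mode could be very simple.  Here's a hedged sketch (all thresholds invented) of picking the slowest shutter speed that keeps motion blur below a tolerance, given the subject's apparent speed across the sensor as the camera could measure between successive frames:</p>

```python
# Hypothetical sketch of 'motion-aware aperture priority': choose the
# slowest shutter speed that keeps motion blur below a tolerance, given
# the subject's apparent speed across the frame.  Parameters invented.

def max_shutter_time(speed_px_per_s: float, max_blur_px: float = 2.0,
                     floor_s: float = 1/32000, ceiling_s: float = 1/2) -> float:
    """Longest exposure (seconds) keeping blur under max_blur_px pixels.

    blur = speed * time, so time <= max_blur_px / speed.  A stationary
    subject (speed ~ 0) gets the ceiling (longest allowed) exposure,
    letting the camera drop the ISO; a fast mover gets a short one.
    """
    if speed_px_per_s <= 0:
        return ceiling_s
    t = max_blur_px / speed_px_per_s
    return max(floor_s, min(ceiling_s, t))

# A perched bird drifting 4 px/s can take the full 1/2 s exposure;
# one taking off at 4,000 px/s is limited to 1/2000 s.
```

<p>A stationary subject gets the longest permitted exposure, letting the camera drop the ISO; the moment the subject moves, the limit tightens accordingly.  The blur tolerance is effectively the &#8216;strength&#8217; setting described above.</p>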



<h3 class="wp-block-heading">Subject-aware shutter priority</h3>



<p>I&#8217;m quite surprised we don&#8217;t already have this, on at least <em>one</em> camera somewhere.</p>



<p>I want the camera to adjust the aperture intelligently to account for the subject&#8217;s depth and focal distance.  So that I can just set it to basically e.g. &#8220;whole head in focus&#8221;, and not worry about micro-managing the settings as the subject moves closer or further away.</p>



<p>It should handle multiple subjects too &#8211; e.g. for a group photo where people aren&#8217;t all neatly in the focus plane it should adjust the aperture to compensate.</p>



<p>Whether intrinsically or through e.g. lens profiles, it should account for curvature of field.</p>



<p>This could be flexible like Programmed Auto mode, where you could use a dial to adjust the depth of field if the camera&#8217;s selection doesn&#8217;t precisely suit your preferences (since you&#8217;ll be making trade-offs between in-focus subjects and background blur).</p>
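<p>The underlying optics here are well understood.  As a sketch (parameter values merely illustrative), the standard thin-lens approximation for total depth of field, DoF &#8776; 2Ncs&#178;/f&#178;, can be inverted to find the f-number for a desired subject depth:</p>

```python
def f_number_for_dof(dof_mm: float, distance_mm: float,
                     focal_length_mm: float, coc_mm: float = 0.03) -> float:
    """Approximate f-number N giving a total depth of field of dof_mm at
    subject distance distance_mm, from DoF ~= 2*N*c*s^2 / f^2 (a good
    approximation when the subject is well inside the hyperfocal
    distance).  coc_mm is the circle of confusion (~0.03 mm is the
    usual figure for full frame)."""
    return dof_mm * focal_length_mm**2 / (2 * coc_mm * distance_mm**2)

# e.g. keeping a ~250 mm deep head in focus at 2 m with an 85 mm lens
# needs roughly f/7.5; the camera would round up to the nearest
# available aperture, and re-run this as the subject distance changes.
```

<p>The camera already knows the focal length and focus distance; subject depth would come from its subject-detection model, and the dial adjustment described above would simply offset the result.</p>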



<h2 class="wp-block-heading">Controls</h2>



<h3 class="wp-block-heading">Automatic grip selection</h3>



<p>I wish the camera could automatically detect which grip I&#8217;m using, so that I don&#8217;t need to micromanage it with a lock control.</p>



<p>Possibly this could be implemented through some kind of contact detection in the two grips, to tell which is being held?  I know it can&#8217;t use camera orientation, since it&#8217;s not uncommon to use either of the grips when they&#8217;re not oriented vertically.</p>



<p>It of course needs to be very reliable (erring, if necessary, on the side of allowing use of the controls vs ignoring them), and work in a wide variety of situations.  e.g. with or without gloves, whether the camera / hands are dry or wet, across a wide temperature range, with hands of various sizes, with hand-holds of various types, etc.</p>



<h3 class="wp-block-heading">Delete &amp; undo</h3>



<p>Currently to delete you have to push the delete button twice, because it prompts you to make sure you want to perform the delete.  This is nominally required because deletes are immediate and permanent.</p>



<p>The vast majority of the time, I <em>do</em> want to perform the delete. Very rarely is it a mistaken button press.</p>



<p>Doubling the button-presses required gets real old when you&#8217;re deleting thousands of photos (and while it&#8217;s faster to delete them on a computer, I prefer to do an initial cull in-camera to avoid wasting space on my computer and backups &#8211; plus if I&#8217;m travelling I may have limited card space and cannot wait until I&#8217;m back home).</p>



<p>It also doesn&#8217;t add much actual safety &#8211; it&#8217;s just hard-wired into my muscle memory to double-tap delete, and occasionally I&#8217;ll delete something I actually didn&#8217;t want to, as a result.  So the current system is inefficient <em>and</em> doesn&#8217;t work as intended.</p>



<p>What it should instead do is follow user interface best practices dating back to the eighties (if not earlier) &#8211; make the delete operation undoable, and therefore not need confirmation every time.</p>



<p>This could be implemented in a variety of ways, each with slight differences in trade-offs.  Even a rudimentary implementation, that only allows the most recent delete to be undone, would still be a huge improvement.</p>



<p>An even more robust system would likely not be much more work &#8211; e.g. move deleted photos to a separate &#8216;bin&#8217; folder, just like on a computer.  The camera could also make them auto-purge, so if the card is full it&#8217;ll start permanently deleting files from the bin as needed to recover space.</p>
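<p>The bin scheme is straightforward to sketch.  This toy version (paths and policy invented, purely illustrative) moves deleted files into a bin directory, restores the most recent one on undo, and permanently purges the oldest entries when space is needed:</p>

```python
# Toy sketch of the 'bin' scheme described above (all names invented):
# delete moves a file into a bin directory, undo restores the most
# recently binned file, and purge_oldest permanently frees space.

import shutil
from pathlib import Path

class PhotoBin:
    def __init__(self, bin_dir: Path):
        self.bin_dir = bin_dir
        self.bin_dir.mkdir(parents=True, exist_ok=True)
        self._order: list[Path] = []   # oldest first

    def delete(self, photo: Path) -> None:
        """'Delete' by moving into the bin -- instantly undoable."""
        target = self.bin_dir / photo.name
        shutil.move(str(photo), str(target))
        self._order.append(target)

    def undo(self, restore_to: Path):
        """Restore the most recently deleted photo, if any."""
        if not self._order:
            return None
        binned = self._order.pop()
        restored = restore_to / binned.name
        shutil.move(str(binned), str(restored))
        return restored

    def purge_oldest(self) -> None:
        """Permanently free space, oldest deletion first."""
        if self._order:
            self._order.pop(0).unlink()
```

<p>On a camera the bin would live on the card itself, so binned files still occupy space until purged &#8211; hence the auto-purge as the card fills.</p>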



<p>Consequently it&#8217;d be <em>much</em> safer &#8211; even against completely accidental delete button presses &#8211; and in-camera image review would involve about a third fewer button presses (currently two deletes plus left or right to move between images for comparison).</p>



<p>Note:  how you perform the undo, I&#8217;m not sure about.  The most common case would be undoing the most recent delete so there should be a way to do that which doesn&#8217;t completely interrupt your image review (i.e. no making you use the Menu button or otherwise switch away from the image you&#8217;re currently looking at).  It could be simply by hitting the &#8216;i&#8217; button and having an &#8216;Undo&#8217; option in that menu.</p>



<h3 class="wp-block-heading">Fix the portrait grip lock switch direction</h3>



<p>It currently rotates <em>opposite</em> to the main power switch (on the landscape grip), which is weird and confusing.  i.e. push the tab away from you to <em>unlock</em> the portrait controls, which on the landscape control turns the camera <em>off</em>.  When I pick the camera up I should be able to use the exact same motion to enable the controls irrespective of which grip I&#8217;m holding.</p>



<p>I&#8217;d love something that goes even further and lets you actually turn the camera on from the portrait grip controls, but I don&#8217;t see a good way to do that (it would interfere with the function of selecting which grip you want to be active).  Though this would be moot if the aforementioned automatic grip selection were supported.</p>



<h3 class="wp-block-heading">Subject detection configuration via customised buttons</h3>



<p>It&#8217;s great that the Z9 returns the functionality that the D500 et al had years ago, of letting you assign AF-ON <em>plus</em> a specific focus area mode to many buttons.  This is super essential for any camera in many circumstances &#8211; especially wildlife where you&#8217;re often dealing with obscured or unusual subjects.  It was <em>particularly</em> remiss of Nikon to leave this out of all their prior Z-mount cameras, since they had such subpar autofocus systems.</p>



<p>However, it still has some limitations in terms of configurability.  e.g. you <em>can</em> configure a button to turn subject detection on or off, but it has to be independent of actually engaging autofocus.  And you can&#8217;t configure it to <em>change</em> the subject detection mode (e.g. from &#8216;All&#8217; to &#8216;Animals&#8217;).</p>



<h2 class="wp-block-heading">Ergonomics</h2>



<h3 class="wp-block-heading">Lighter</h3>



<p>I almost didn&#8217;t call this out, except Canon proved with the R3 that you can shave a significant amount of weight with seemingly no downside.  That would be appreciated &#8211; it&#8217;d be right in line with Nikon&#8217;s impressive improvements to their telephoto lenses, making them <em>much</em> lighter than their DSLR forebears.</p>



<h3 class="wp-block-heading">Symmetric function buttons in portrait vs landscape grips</h3>



<p>It&#8217;s baffling to me that there are three customisable buttons next to the lens mount for the landscape grip, but none for the portrait grip; you can only reach <em>one</em> of the three buttons in portrait mode.</p>



<p>They should add another two buttons for the portrait grip, matching their relative positions on the landscape grip.</p>



<p>There&#8217;s still a challenge of button function, if they continue to share a button between the grips, since in landscape mode it&#8217;s under your pinky or ring finger while in portrait mode it&#8217;s under your index or pointer finger.  Ideally the camera would switch automatically depending on which grip you&#8217;re actually using, <em>iff</em> there&#8217;s a reliable way for it to detect that.  If not, it might be worth adjusting the button placements so that you have completely independent button sets between the two orientations (and at least mirror the settings between each set &#8211; though I wouldn&#8217;t object if they could also be customised independently).</p>



<h3 class="wp-block-heading">Smaller</h3>



<p>It could be smaller without compromising ergonomics &#8211; maybe 10-20%.  At least w.r.t. the grips.  It <em>barely</em> makes the list, though, since the main way to make it substantially smaller is to remove the portrait grip, which arguably defeats the point of a top-line camera.  That said, the Z8 (and the Sony Alpha 1 before it) have shown that there is a <em>strong</em> market for a flagship <em>without</em> built-in portrait grip.</p>



<p>Before I got the Z9 I was pretty sure a built-in portrait grip was <em>not</em> for me, though after getting used to the Z9 I&#8217;m now more on the fence.  I&#8217;ve had detachable portrait grips for prior cameras, and I recognise that they just don&#8217;t feel as good as a built-in grip.  They&#8217;re also heavier, and less robust.</p>



<h2 class="wp-block-heading">EVF / LCD</h2>



<h3 class="wp-block-heading">Larger LCD</h3>



<p>I don&#8217;t know how it might work ergonomically &#8211; good placement of physical buttons is definitely the priority, and there&#8217;s only so much space available on a reasonably-sized camera &#8211; but it would be really nice if the LCD were substantially bigger.  Compared to what we&#8217;re used to today with phones, camera LCDs are <em>tiny</em>.</p>



<p>It would need higher resolution to compensate.  I&#8217;m not <em>thrilled</em> with the Z9&#8217;s LCD pixel density, but it&#8217;s okay.  As long as the pixel density didn&#8217;t decrease, it&#8217;d be okay.</p>



<h3 class="wp-block-heading">Lower latency</h3>



<p>Though the Z9&#8217;s EVF latency appears to be the best of any mirrorless camera to date (according to various test reports I&#8217;ve seen), there <em>is</em> still visible lag (even in 120Hz mode).  It&#8217;s not a big deal by any stretch, and the vast majority of the time I don&#8217;t perceive it.  It&#8217;s only if I&#8217;m moving really rapidly, especially if changing direction frequently.  However, even if I don&#8217;t typically <em>perceive</em> it, I wonder if it&#8217;s nonetheless having a negative impact on my performance with the camera.</p>



<p>I doubt that higher refresh rates are the solution, at least not directly.  The problem is the time it takes for photons hitting the sensor to be reflected in the EVF.  It might be technologically impossible to eliminate the delay entirely (even before you hit the physical limits), but I hope there&#8217;s still improvement possible.</p>



<h3 class="wp-block-heading">Higher resolution EVF</h3>



<p>This didn&#8217;t initially make my list, but after much use I do think the Z9 EVF is a tad soft.  I can see the pixels, and I do find it&#8217;s a bit tricky to judge focus precisely (without digitally zooming in) &#8211; more so than with an optical viewfinder.</p>



<p>Possibly related, I&#8217;m a bit mystified as to why image review in the EVF seems so blocky and pixelated compared to on the rear LCD, given the latter is objectively much lower resolution.  It seemingly can&#8217;t be a hardware problem &#8211; perhaps a software error?  Whatever it is, fixing it would essentially increase the resolution too, for image review.</p>



<p>Note also that I&#8217;m focused on the EVF specifically here.  Curiously I don&#8217;t see the pixels on the LCD, or at least I never notice them.  I think because the viewing distance is so much farther away.  I certainly wouldn&#8217;t object to a higher pixel-density LCD too, but it&#8217;s not something I really need.</p>



<h2 class="wp-block-heading" id="the-usuals">The rest &amp; the usuals</h2>



<p>None of these last few items are what I would call critical, nor worth highlighting individually.  They tend to improve <em>incrementally</em> over time in any case.  Those improvements are important and appreciated, but not noteworthy unless there&#8217;s an unusually big leap.</p>



<p>Though admittedly it would be <em>particularly</em> good to at least match the state of the art w.r.t. image quality (or even of much older cameras like the D850).</p>



<ul class="wp-block-list">
<li>Less noise.</li>



<li>Higher resolution.  Though I don&#8217;t want to sacrifice anything for minor resolution gains &#8211; e.g. to go up to 60MP.  For a major jump &#8211; e.g. to 100MP &#8211; I might be willing to trade off other aspects of performance.</li>



<li>Better battery life when the camera is left on.  While its start-up delay is relatively brief compared to most cameras, it&#8217;s still far from zero, and in any case it costs time to locate &amp; operate the power button every time I bring the camera to my eye.</li>



<li>CFexpress 4.0 support, for at least a doubling in write speed (although the Z9 currently uses barely more than half the available write performance of CFexpress 2.0 anyway, so in fact there&#8217;s room for nearly a 4x improvement with current technology).</li>
</ul>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/z9-ii-wishlist/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">5022</post-id>	</item>
		<item>
		<title>People vs Products</title>
		<link>https://wadetregaskis.com/people-vs-products/</link>
					<comments>https://wadetregaskis.com/people-vs-products/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Fri, 28 Aug 2020 23:04:28 +0000</pubDate>
				<category><![CDATA[Coding]]></category>
		<category><![CDATA[Ideas]]></category>
		<category><![CDATA[Ramblings]]></category>
		<category><![CDATA[Apple]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[leadership]]></category>
		<category><![CDATA[LinkedIn]]></category>
		<category><![CDATA[management]]></category>
		<category><![CDATA[people manager]]></category>
		<category><![CDATA[Peter Principle]]></category>
		<category><![CDATA[technical lead]]></category>
		<guid isPermaLink="false">https://blog.wadetregaskis.com/?p=4580</guid>

					<description><![CDATA[I&#8217;ve experienced an interesting arc over my twenty or so years (thus far) of software development. I started out as a one-person shop, doing my own things, selling shareware. I had no manager nor technical lead. I had to make all my own decisions, in all aspects, without guidance or assistance. Subsequently, during my four&#8230; <a class="read-more-link" href="https://wadetregaskis.com/people-vs-products/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p>I&#8217;ve experienced an interesting arc over my twenty or so years (thus far) of software development.</p>



<p>I started out as a one-person shop, doing my own things, selling shareware.  I had no manager nor technical lead.  I had to make all my own decisions, in all aspects, without guidance or assistance.</p>



<p>Subsequently, during my four years at Apple, I did have a manager, but they focused on people, not the technical &#8211; myself and/or my colleagues collectively made the technical decisions, and provided technical leadership, and effectively set the product direction.  My managers were there to make that as easy as possible for us.</p>



<p>Over my nearly eight years at Google, I observed the tail half of a major cultural transition for Google.  Long before I started, Google had explicitly laid down a culture where managers were not product / technical leads.  The two roles were physically separated, between different people, and they operated independently.  Managers focused on people &#8211; career growth, happiness, basic productivity, &amp; skills &#8211; while tech leads focused on the technical, the product.  In fact the manager role was so principled about focus on people that managers would sometimes help their direct reports <em>leave the company</em>, if that was simply what was best for those people for their own success &amp; growth.  And, to be clear, not in a &#8220;you aren&#8217;t working out&#8221; sense, but for engineers that were excellent and simply didn&#8217;t have deserved opportunities available to them at Google.</p>



<p>By the time I joined, that culture was half-gone, but still present enough in my division for me to experience it.  But by the time I left the culture was heavily weighted towards managers being technical leads.</p>



<p>In my nearly three years now at LinkedIn, I&#8217;ve completed that arc.  LinkedIn culturally &amp; executively emphasises managers as technical / product leads even more so than Google ever did.  As far as I&#8217;ve been told, LinkedIn always has (meaning, this is presumably the culture Yahoo had too, from which LinkedIn forked).</p>



<p>Having experienced most of this spectrum, I finally feel qualified to pass judgement on it.</p>



<figure class="wp-block-pullquote"><blockquote><p>Managers should not be leads.</p></blockquote></figure>



<p>I immediately, intuitively recognised &amp; appreciated this at Google, but now I&#8217;m certain of it.</p>



<p>People management &amp; (technical) product leadership are fundamentally at odds with each other.  The needs of individuals are often at odds with the needs of the product.  The product might need Natalie to really focus on churning through a bunch of menial tasks, but to evolve, Natalie might really need design experience &amp; leadership opportunities.</p>



<p>Having one person (in authority) try to wear both hats creates conflict, bias, and inefficiency.  It discourages dialogue, because you can never <em>really</em> trust where the polymorph stands.  The roles require different skillsets, which rarely coexist in a single person and in any case are difficult to keep up to date in parallel.  Context-switching between them is burdensome.  It creates a power imbalance and perverse incentives.</p>



<p>Even if an individual is exceptionally talented at mitigating those problems, they simply don&#8217;t have the time to do both well.  Being a product or technical lead is <em>at least</em> a full-time job.  Likewise, helping a team of any real size grow as individuals requires way more hands-on, one-on-one attention than most people realise.  It&#8217;s hard enough being good at either one of them alone &#8211; anyone that attempts doing both simultaneously ends up doing neither effectively.</p>



<p>I&#8217;ve had the opportunity to be both a technical lead <em>only</em> and a manager <em>only</em>.  This is quite rare in the tech industry.  I deeply appreciated being able to focus on <em>just one</em> of those roles at a time.  I could be consistent, deliberate, and <em>honest</em>.  I could, as a manager, tell people exactly what I thought they should or shouldn&#8217;t work on, irrespective of what the product(s) need, because I knew the technical lead(s) would worry about those angles.  Conversely, when I was a technical lead, I could lay out what was simply, objectively best for the project, uncomplicated by individuals&#8217; interests.  In either case, there was a real, separate human being who could be debated with, as necessary, to find happy mediums.</p>



<p>Yet beyond just being more efficient and effective, the serendipitous consequence was that it <em>gave agency to the individuals</em> &#8211; whenever a conflict arose between people and products, it was revealed to them, and the implicit decision about it was at least in part theirs to make.  Most importantly, they knew that <em>whichever</em> way they leaned they had someone in their corner who had their back.</p>



<p>(Of course, sometimes they didn&#8217;t <em>like</em> having to make that decision, but putting it on them forced them to take control and responsibility for themselves, and evolve into more confident, happy, motivated developers.)</p>



<p>I suppose it&#8217;s no surprise that companies tend this way &#8211; to conflate people with products.  These days, for many big tech companies, people literally <em>are</em> the products, and their humanity is inevitably stripped away in the process.  People are &#8220;promoted&#8221; into management from technical positions, and, often by way of <a href="https://en.wikipedia.org/wiki/Peter_principle" data-wpel-link="external" target="_blank" rel="external noopener">the Peter Principle</a>, are not actually good people managers, <em>nor</em> able to relinquish their former role and ways of thinking.  A hierarchy of technical leads in managers&#8217; clothing becomes self-sustaining, self-selecting, and self-enforcing.</p>



<p>The question is:  what&#8217;s the antidote?</p>



<p>Acknowledgement:  I was inspired to pen this post by reading <a href="https://www.linkedin.com/in/rtwortham/" data-wpel-link="external" target="_blank" rel="external noopener">Tanner Wortham</a>&#8216;s <a href="https://worth.am/manager-product-owner-fail/" data-wpel-link="external" target="_blank" rel="external noopener">Why Manager as Product Owner Will Usually Fail</a>, which is essentially positing the same thing albeit in different terminology.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/people-vs-products/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4580</post-id>	</item>
		<item>
		<title>Remix</title>
		<link>https://wadetregaskis.com/remix/</link>
					<comments>https://wadetregaskis.com/remix/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Wed, 17 Dec 2008 05:39:37 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20081216213937</guid>

					<description><![CDATA[I went to see Lawrence Lessig give a talk this evening, at the Computer History Museum, on the topic of copyright. It&#8217;s an issue which concerns me, as should it to anyone of my generation, given the ridiculous state of it today. I&#8217;ve never heard Lessig speak before, nor read any of his books, but&#8230; <a class="read-more-link" href="https://wadetregaskis.com/remix/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[<p><font>I went to see Lawrence Lessig give a talk this evening, at the Computer History Museum, on the topic of copyright.  It&#8217;s an issue which concerns me, as should it to anyone of my generation, given the ridiculous state of it today.  I&#8217;ve never heard Lessig speak before, nor read any of his books, but I&#8217;m well familiar with him by reputation, so I figured even if it was a boring talk overall, he&#8217;s someone worth listening to.</font></p>
<p><font>I was surprised by how well he spoke.  I can&#8217;t recall hearing a better speech, at least not in recent memory.  I suppose that&#8217;s not surprising, given he was of course summarising his most recent book, and as someone who&#8217;s lived and breathed the topic for the last decade, he should be thoroughly versed in it.</font></p>
<p><font>He didn&#8217;t say anything I didn&#8217;t already know, but rather everything I didn&#8217;t know I knew.  (sorry, couldn&#8217;t resist)  I&#8217;d call him brilliant if he hadn&#8217;t so eloquently convinced me that his opinion is just so blindingly obvious: that copyright today &#8211; the legal interpretation of ancient laws &#8211; simply hasn&#8217;t been <i>applied</i> correctly to the world of today (the &#8220;digital&#8221; world, I suppose).  He was swift to clarify that he is not an abolitionist when it comes to copyright, but simply wants to see it applied correctly.  It seems hard to argue against that; he&#8217;s clearly right, and you&#8217;d have to wonder why anyone would disagree.</font></p>
<p><font>Though of course he pointed out why they would &#8211; selfishness and greed; money and power.  That has apparently segued into his focus for the next decade, once he takes up residence at Harvard next year: political corruption.  The way he presented the issue particularly resonated with me &#8211; it wasn&#8217;t alarmist or extremist, but rather kind of forlorn.  Everyone is deservedly cynical about politics and the massive corruption perceived within it, so again it seems like he&#8217;s comfortable and certain that everyone already knows this, and he&#8217;s not trying to condescend to his audience by stating the obvious, but rather to encourage action to effect real change.</font></p>
<p><font>I bought his book on the way out, leery as I was of the potential commercialism or hypocrisy in doing so &#8211; it&#8217;s all about selling books as a public speaker, isn&#8217;t it? :) &#8211; so if nothing else that&#8217;ll keep me occupied through a few otherwise idle evenings.  I&#8217;m not sure what it&#8217;ll add, given his presentation seemed to convey his points so well already, but hopefully it&#8217;s just as impressive.</font></p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/remix/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1616</post-id>	</item>
		<item>
		<title>AppleScript to calculate &#8220;most liked&#8221; band</title>
		<link>https://wadetregaskis.com/applescript-to-calculate-most-liked-band/</link>
					<comments>https://wadetregaskis.com/applescript-to-calculate-most-liked-band/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Thu, 08 Jun 2006 12:14:30 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20060608221430</guid>

					<description><![CDATA[Ordering by most plays or rating only works on a per song basis in iTunes. So, write an AppleScript to calculate some kind of per-artist (and possibly per-album) ranking. A simple implementation would be to sum the ratings of all the songs by each artist, or sum their play count. Alternatives include just an average&#8230; <a class="read-more-link" href="https://wadetregaskis.com/applescript-to-calculate-most-liked-band/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[<p><font>Ordering by most plays or rating only works on a per song basis in iTunes.  So, write an AppleScript to calculate some kind of per-artist (and possibly per-album) ranking.  A simple implementation would be to sum the ratings of all the songs by each artist, or sum their play count.  Alternatives include just an average of each artist&#8217;s song ratings or play count.  But that would unfairly promote one hit wonders.  &#8216;course, you could then sort by number of songs by the artist&#8230;</font></p>
<p><font>Worth playing with, anyway.</font></p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/applescript-to-calculate-most-liked-band/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1615</post-id>	</item>
		<item>
		<title>The Sims 3</title>
		<link>https://wadetregaskis.com/the-sims-3/</link>
					<comments>https://wadetregaskis.com/the-sims-3/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Fri, 11 Nov 2005 15:13:31 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20051112021331</guid>

					<description><![CDATA[It strikes me as odd that games like The Sims or Space Colony always have you forcing your little simulated personalities to train in different things. It&#8217;s the generic level-up scheme adopted by computer games since the year dot. And it&#8217;s really boring. And unrealistic. In reality, I don&#8217;t decide &#8220;hey, I&#8217;m going to learn&#8230; <a class="read-more-link" href="https://wadetregaskis.com/the-sims-3/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[<p><font>It strikes me as odd that games like The Sims or Space Colony always have you forcing your little simulated personalities to train in different things.  It&#8217;s the generic level-up scheme adopted by computer games since the year dot.  And it&#8217;s really boring.  And unrealistic.</font></p>
<p><font>In reality, I don&#8217;t decide &#8220;hey, I&#8217;m going to learn X% of mechanics, because&#8221;&#8230; I say &#8220;crap, my car&#8217;s broken again&#8230; time to get out some books on introductory mechanics&#8221;.  It&#8217;s the problem that drives us to learn, not some arbitrary god figure clicking insistently on the bookshelf.</font></p>
<p><font>So I&#8217;d like to see Sims-style games take a different approach in future &#8211; make it part of the gameplay that you have to </font><font face="Helvetica-Oblique"><i>challenge</i></font><font> your Sims, which forces them to learn, to adapt, to evolve.  Perhaps the way-hyped &#8220;Spore&#8221; will do just that&#8230; who knows.</font></p>
<p><font>It&#8217;d certainly be much more interesting to think up carrots with which to improve the minds and bodies of my Sims, than to just keep applying the stick all the time.</font></p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/the-sims-3/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1614</post-id>	</item>
		<item>
		<title>Coding for stability</title>
		<link>https://wadetregaskis.com/coding-for-stability/</link>
					<comments>https://wadetregaskis.com/coding-for-stability/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Mon, 24 Oct 2005 12:35:46 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20051024223546</guid>

					<description><![CDATA[You know, it always kills me, the Linux vs MINIX debate, Linus vs Tanenbaum. Everyone loves Tanenbaum&#8217;s ideals, but can&#8217;t refute Linus&#8217; pudding &#8211; the orders of magnitude faster Linux is than MINIX. You&#8217;d think that with the focus these days more on coding for reliability, simplicity, elegance and maintainability &#8211; all at the expense&#8230; <a class="read-more-link" href="https://wadetregaskis.com/coding-for-stability/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p>You know, it always kills me, the Linux vs MINIX debate, Linus vs Tanenbaum. Everyone loves Tanenbaum&#8217;s ideals, but can&#8217;t refute the proof of Linus&#8217; pudding &#8211; that Linux is orders of magnitude faster than MINIX. You&#8217;d think that the focus these days on coding for reliability, simplicity, elegance and maintainability &#8211; all at the expense of performance &#8211; would swing the argument in favour of the microkernel architecture. Yet it doesn&#8217;t. Sure, MacOS X uses a so-called microkernel architecture, although Apple are the first to admit that it&#8217;s really some kind of weird hybrid. They just couldn&#8217;t get the performance they <em>needed</em> out of a pure microkernel implementation.</p>



<p>But another reason perhaps is that the arguments in favour of microkernels are largely fluff. The advantages in stability and security oft-touted don&#8217;t go nearly as far as the proponents would like us to believe. The common line is:</p>



<blockquote class="wp-block-quote is-layout-flow wp-block-quote-is-layout-flow">
<p>&#8220;For example, each device driver runs as a separate user-mode process so a bug in a driver (by far the biggest source of bugs in any operating system), cannot bring down the entire OS. In fact, most of the time when a driver crashes it is automatically replaced without requiring any user intervention, without requiring rebooting, and without affecting running programs. These features, the tiny amount of kernel code, and other aspects greatly enhance system <a href="https://web.archive.org/web/20051029025533/https://www.minix3.org/reliability.html" data-wpel-link="external" target="_blank" rel="external noopener">reliability</a>.&#8221;</p>
<cite><a href="https://web.archive.org/web/20051029030127/https://www.minix3.org/index.html" data-wpel-link="external" target="_blank" rel="external noopener">The MINIX 3 website</a></cite></blockquote>



<p>Now, let&#8217;s just hold our horses there. Sure, if an <em>unused</em> driver crashes, it can be reloaded and none will be the wiser. But that&#8217;s not how it works, is it? See, drivers have <em>state</em>. And when they crash, they lose that state. All the hyperbole in the world isn&#8217;t going to magically restore it. Even if you could, should you? Having the reloaded driver restored to the state it was in just before it crashed may not do any better the second time around &#8211; it may just crash exactly the same way again.</p>



<p>And what about everything else that&#8217;s using the dead driver? Well shit. I mean, we just don&#8217;t think about these issues properly. If I&#8217;m writing to disk via some file system driver, which crashes sometime during the write, I&#8217;m in trouble. Can the OS automagically restore the driver and continue or repeat the write without me knowing? I doubt it. So what happens? Well, I guess my program will get an error back from the write. But what if the write did actually go down to the physical media before the crash? Oh oh.</p>



<p>So we have all this journalling and so forth&#8230; but really, it&#8217;s a bad solution in the long term; journalling just adds more places where things can go wrong.</p>



<p>So now our driver has to be able to figure out what it&#8217;s already done. Maybe it can do that, sure. But does it? In today&#8217;s drivers? I doubt it.</p>



<p>You see, the focus on microkernels is really just taking to a wheat harvest with a pocket knife. It&#8217;s not thinking on the appropriate scale. What microkernels critically provide is simple, defined and protected interfaces between modules. That&#8217;s all it is.</p>



<p>But, you see, the best place to do everything is at compile time, not run time. Errors at run time piss off your users, since they&#8217;re the ones running them. No, what we need are smarter compilers &#8211; compilers with defined limits on parameters and more explicit type checking.</p>



<p>And yet I&#8217;m a big fan of Objective-C? Why is that? Can these two bipolar titans be married? I like to think so. See, let&#8217;s take an example to explain this simply. I have a function for doing logs, like so (in traditional C):</p>



<div class="wp-block-kevinbatdorf-code-block-pro padding-disabled" data-code-block-pro-font-family="" style="font-size:.875rem;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)"><pre class="shiki light-plus" style="background-color: #FFFFFF" tabindex="0"><code><span class="line"><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #795E26">loge</span><span style="color: #000000">(</span><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #001080">x</span><span style="color: #000000">) {</span></span>
<span class="line"><span style="color: #008000">  /* Perform magic arithmetic here */</span></span>
<span class="line"><span style="color: #000000">  </span><span style="color: #AF00DB">return</span><span style="color: #000000"> result;</span></span>
<span class="line"><span style="color: #000000">}</span></span></code></pre></div>



<p>Now that&#8217;s not much good, really. What if someone tries to pass a negative value, or zero? Natural log isn&#8217;t defined (in the real domain; we are working with <em>doubles</em> here) for negative values (and loge(0) goes to negative infinity as a limit; it has no actual value). So, sure, the standard way of going about this is to either use algorithms which handle this implicitly, or more generically to do parameter checking:</p>



<div class="wp-block-kevinbatdorf-code-block-pro padding-disabled" data-code-block-pro-font-family="" style="font-size:.875rem;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)"><pre class="shiki light-plus" style="background-color: #FFFFFF" tabindex="0"><code><span class="line"><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #795E26">loge</span><span style="color: #000000">(</span><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #001080">x</span><span style="color: #000000">) {</span></span>
<span class="line"><span style="color: #000000">  </span><span style="color: #AF00DB">if</span><span style="color: #000000"> (</span><span style="color: #098658">0.0</span><span style="color: #000000"> &gt;= x) {</span></span>
<span class="line"><span style="color: #008000">    // Barf!!!</span></span>
<span class="line"><span style="color: #000000">  } </span><span style="color: #AF00DB">else</span><span style="color: #000000"> {</span></span>
<span class="line"><span style="color: #008000">    /* Perform magic arithmetic here */</span></span>
<span class="line"><span style="color: #000000">    </span><span style="color: #AF00DB">return</span><span style="color: #000000"> result;</span></span>
<span class="line"><span style="color: #000000">  }</span></span>
<span class="line"><span style="color: #000000">}</span></span></code></pre></div>



<p>Now, here&#8217;s the problem&#8230; how do we &#8220;barf&#8221;? Do we return a symbolic value that represents NaN (Not a Number)? Perhaps we just return 0? Perhaps we raise an exception (if we&#8217;re using C++)? Perhaps we call exit()? Perhaps we just spin in while (1) {} to annoy the caller?</p>



<p>Is the compiler smart enough to realise that we&#8217;re using a bad value when we invoke the function? Nope. Even if I use assert() or some similar standard procedure, the compiler won&#8217;t even issue a warning on the parameter. Compilers just don&#8217;t do range or domain checking (at least, gcc doesn&#8217;t). It would be a good start if they did.</p>



<p>So what we need are stricter types &#8211; we need definitions of domains and conditions. Imagine, if you can, if we could do this:</p>



<div class="wp-block-kevinbatdorf-code-block-pro padding-disabled" data-code-block-pro-font-family="" style="font-size:.875rem;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)"><pre class="shiki light-plus" style="background-color: #FFFFFF" tabindex="0"><code><span class="line"><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #795E26">loge</span><span style="color: #000000">(</span><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #001080">x</span><span style="color: #000000">) </span><span style="color: #795E26">where</span><span style="color: #000000"> (</span><span style="color: #098658">0.0</span><span style="color: #000000"> &lt; x) {</span></span>
<span class="line"><span style="color: #008000">  /* Perform magic arithmetic here */</span></span>
<span class="line"><span style="color: #000000">  </span><span style="color: #AF00DB">return</span><span style="color: #000000"> result;</span></span>
<span class="line"><span style="color: #000000">}</span></span></code></pre></div>



<p>In this mythical language &#8211; let&#8217;s call it ++C &#8211; the compiler would issue an error if we tried to violate the explicit condition we&#8217;ve provided. Thus it would be impossible, given a suitably intelligent compiler, to call this function with an inappropriate parameter.</p>



<p>Of course, people will say &#8220;but how is the compiler to know what values we&#8217;re going to use if we&#8217;re taking them from the user, for example?&#8221;. &#8220;Output one, accept any&#8221;. Sources such as files could have any value of course, so the compiler should assume any possible value. Thus, if we wanted to take our value &#8220;input&#8221; and pass it to loge, we&#8217;d have to do our own explicit range checking <em>in the caller</em>.</p>



<p>So, we&#8217;ve just shifted the problem to a different place, right? Well, yes and no. Basically yes, which is a good thing on its own. Now the caller &#8211; which is going to know more about the data than the callee &#8211; must make the decisions about how to handle invalid data. Excellent. We move input validation right to the very top of our program, where it should be. This is the way it should always be done anyway &#8211; it keeps your core code simpler, leaner and faster, since it doesn&#8217;t have to perform redundant parameter checking. In reality, because there is no compiler enforcement of parameter limitations like this, we end up with a lot of redundancy as checks are put in at multiple layers, just to be safe. Even then, we still miss things.</p>



<p>This could even be integrated into existing compilers in a completely backwards compatible way, by having some kind of error marker which the compiler could use to determine which code branches should never intentionally be taken. For example, the compiler could see the following:</p>



<div class="wp-block-kevinbatdorf-code-block-pro padding-disabled" data-code-block-pro-font-family="" style="font-size:.875rem;line-height:1.25rem;--cbp-tab-width:2;tab-size:var(--cbp-tab-width, 2)"><pre class="shiki light-plus" style="background-color: #FFFFFF" tabindex="0"><code><span class="line"><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #795E26">loge</span><span style="color: #000000">(</span><span style="color: #0000FF">double</span><span style="color: #000000"> </span><span style="color: #001080">x</span><span style="color: #000000">) {</span></span>
<span class="line"><span style="color: #000000">  </span><span style="color: #AF00DB">if</span><span style="color: #000000"> (</span><span style="color: #098658">0.0</span><span style="color: #000000"> &gt;= x) {</span></span>
<span class="line"><span style="color: #000000">    </span><span style="color: #795E26">INVALID_PARAMETER</span><span style="color: #000000">(x);</span></span>
<span class="line"><span style="color: #008000">    // Barf!!!</span></span>
<span class="line"><span style="color: #000000">  } </span><span style="color: #AF00DB">else</span><span style="color: #000000"> {</span></span>
<span class="line"><span style="color: #008000">    /* Perform magic arithmetic here */</span></span>
<span class="line"><span style="color: #000000">    </span><span style="color: #AF00DB">return</span><span style="color: #000000"> result;</span></span>
<span class="line"><span style="color: #000000">  }</span></span>
<span class="line"><span style="color: #000000">}</span></span></code></pre></div>



<p>It could perform the same kind of analysis as talked about previously, but instead of an explicit addition to the language syntax, it just looks for this <code>INVALID_PARAMETER</code> macro invocation. If its analysis indicates it&#8217;s possible to reach this point in the code, it can issue an error &#8211; or at the very least a warning. This can be tied in with dead code stripping as well; if the compiler can prove at compile time that no <code>INVALID_PARAMETER</code> invocations are reached, it can remove all the code in the same scope as the invocation. Fantastic!</p>



<p>I should add, there&#8217;s a language called D which I believe tries to adopt something like this. I&#8217;m yet to encounter a D compiler &#8211; although I haven&#8217;t looked &#8211; but until it makes it into gcc, it&#8217;s not going to get the large-scale support such features need.</p>



<p>So, with this advanced compile-time analysis, we get the benefits of microkernels &#8211; and more &#8211; without the runtime performance costs. We can all follow the Linux standards and code like psychotic schizophrenics, and still get the safety we need.</p>



<p>Well, okay, so there&#8217;s a lot of other stability and security issues beyond just this, but I think it alleviates a major fraction of the problems. Throw in more intelligent compiler behaviour regarding pointers, arrays, etc, and we&#8217;ll be set.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/coding-for-stability/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1613</post-id>	</item>
		<item>
		<title>Chameleon</title>
		<link>https://wadetregaskis.com/chameleon/</link>
					<comments>https://wadetregaskis.com/chameleon/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Wed, 19 Oct 2005 12:07:25 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20051019220725</guid>

					<description><![CDATA[A while ago, when I first released Rotated Windows, I used sitx or somesuch as the archive format. Someone complained &#8211; of course &#8211; so as a bit of a sarcastic riposte I also put up archives in every other format I could &#8211; zip, bz2, gzip, sit, arj, rar, etc. It was a laugh,&#8230; <a class="read-more-link" href="https://wadetregaskis.com/chameleon/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[
<p><span>A while ago, when I first released Rotated Windows, I used sitx or somesuch as the archive format. Someone complained &#8211; of course &#8211; so as a bit of a sarcastic riposte I also put up archives in every other format I could &#8211; zip, bz2, gzip, sit, arj, rar, etc. It was a laugh, I guess, and an interesting comparison of archive formats.</span></p>



<p><span>[Incidentally, I believe it was bzip2 that was the top performer. sitx did reasonably well, but I think it has more overhead than the zip or gzip formats, and consequently lost out to them on a piddly ~70k archive.]</span></p>



<p><span>And I was just reading this article <a href="https://web.archive.org/web/20051022000911/https://linuxdevices.com/news/NS7231044963.html" data-wpel-link="external" target="_blank" rel="external noopener">here</a>, which talks about a company called Neuros Audio soliciting feedback from the &#8220;hacker&#8221; community as to what direction they should take with their future product(s). They go on about how important it is to them to get involved with the hacker community, and how they&#8217;re using Linux 2.6 and uber-leet audiophile-worthy DAC/ADCs, etc&#8230; and then at the end there&#8217;s the note:</span></p>



<p><span style="font-family: Arial-ItalicMT;"><i>More details, including a downloadable 18-page Word document describing the current development board specification, can be found <a href="https://web.archive.org/web/20050407073714/http://open.neurosaudio.com/" data-wpel-link="external" target="_blank" rel="external noopener">here</a>.</i></span></p>



<p><span style="font-family: ArialMT;">Ummm&#8230; what? It&#8217;s a Word document&#8230; for a document supposedly aimed at Linux users. Duh.</span></p>



<p><span style="font-family: ArialMT;">So I thought&#8230; really, there&#8217;s always going to be someone complaining about the format, just as I found out myself with Rotated Windows. So why not provide a whole bunch of different formats? The immediate pessimistic response is because it makes the download process more convoluted &#8211; asking users to pick a format, when they may not really be aware of the differences between them or what their computer can work with.</span></p>



<p><span style="font-family: ArialMT;">But the optimist would say we just need better technology, that allows us to indicate which formats we prefer, and have the conversion performed &#8211; on the fly if possible &#8211; at the server end. If the conversion is done there (rather than using importers or conversion tools locally, which is difficult for many average users) it can be done properly &#8211; i.e. they can proof each differently formatted version to ensure the conversion went smoothly.</span></p>



<p><span style="font-family: ArialMT;">I call this very trademarkable idea Chameleon. In my mind the browser would maintain a list of preferred document formats for different MIME types, which it could submit to the server when requesting the files (or when otherwise prompted, if a 2-stage fetch is necessary). The server could then do what it could to work within that preference&#8230;</span></p>



<p><span style="font-family: ArialMT;">&#8230;I think it&#8217;s a sweet idea. A trivial demo would be to use php and ImageMagick to convert images on the fly to whatever format each user has previously stated they prefer. Beyond that you could use converters for different text formats, and then perhaps even more complex things like spreadsheets, databases, etc.</span></p>



<p><span style="font-family: ArialMT;">So if anyone makes their millions off this idea, don&#8217;t forget about me. ;)</span></p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/chameleon/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1612</post-id>	</item>
		<item>
		<title>Purposefully mangling arrays of structs</title>
		<link>https://wadetregaskis.com/purposefully-mangling-arrays-of-structs/</link>
					<comments>https://wadetregaskis.com/purposefully-mangling-arrays-of-structs/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Sun, 09 Oct 2005 13:03:20 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20051009230320</guid>

					<description><![CDATA[It&#8217;s a pretty common scenario to have an array of some structs, where you frequently iterate through the array using only one field in the struct. This is a cache nightmare &#8211; memory is loaded into cache sequentially by prefetching, meaning you&#8217;re wasting all that bandwidth loading all the other fields of the struct that&#8230; <a class="read-more-link" href="https://wadetregaskis.com/purposefully-mangling-arrays-of-structs/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[<p><font>It&#8217;s a pretty common scenario to have an array of some structs, where you frequently iterate through the array using only one field in the struct.  This is a cache nightmare &#8211; memory is loaded into cache sequentially by prefetching, meaning you&#8217;re wasting all that bandwidth loading all the other fields of the struct that you&#8217;re not interested in.  A far more efficient way is to have each &#8220;field&#8221; be in a separate array, and do all the careful maintenance of this set of arrays.</font></p>
<p><font>Of course, that&#8217;s an annoying way to program, and accident prone.  A much better way would be if the compiler could do this magically for you, by pulling the struct apart behind the scenes and storing it as such.  In this way cache would be used much more effectively, memory bandwidth usage reduced substantially, and everyone made happier.</font></p>
<p><font>There are of course issues with this, because modifying the length of the array would be more expensive with multiple arrays behind the scenes, and it may be difficult to determine automagically which approach is better [in programs which have multiple distinct access patterns to that array].  Still, it seems like it should be an option &#8211; there&#8217;s plenty of cases where such an optimisation could be made, and yield significant improvements.</font></p>
<p><font>Now if only I knew the intimate ins and outs of the gcc 4.x source.  D&#8217;oh. :)</font></p>
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/purposefully-mangling-arrays-of-structs/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1611</post-id>	</item>
		<item>
		<title>Real-time backup</title>
		<link>https://wadetregaskis.com/real-time-backup/</link>
					<comments>https://wadetregaskis.com/real-time-backup/#respond</comments>
		
		<dc:creator><![CDATA[]]></dc:creator>
		<pubDate>Wed, 05 Oct 2005 03:26:43 +0000</pubDate>
				<category><![CDATA[Ideas]]></category>
		<guid isPermaLink="false">http://E20051005132643</guid>

					<description><![CDATA[An idea I had a while ago was to implement a real-time backup system, which saves copies of files as they&#8217;re modified to a 2nd location. In the simplest form this would be a kind of manual mirrored RAID setup, which could work at the directory or volume level rather than disk level. The problem&#8230; <a class="read-more-link" href="https://wadetregaskis.com/real-time-backup/" data-wpel-link="internal">Read more</a>]]></description>
										<content:encoded><![CDATA[<p><font>An idea I had a while ago was to implement a real-time backup system, which saves copies of files as they&#8217;re modified to a 2nd location.  In the simplest form this would be a kind of manual mirrored RAID setup, which could work at the directory or volume level rather than disk level.</font></p>
<p><font>The problem I had was figuring out how to get notifications of every file modification.  It can be done &#8211; Spotlight obviously gets such notifications.  But it can&#8217;t be done using kqueues (as far as I can tell, anyway), and I&#8217;m not aware of any other public API for doing so.  I&#8217;m not really </font><font face="Helvetica-Oblique"><i>that</i></font><font> interested in the idea &#8211; not enough to reverse engineer half the system trying to discover the private mechanism&#8230; so the idea languished.</font></p>
<p><font>Until two things struck me.  First, Apple just posted <a href="https://developer.apple.com/documentation/Darwin/Reference/usr_APIs/kern_event/index.html" data-wpel-link="external" target="_blank" rel="external noopener">this</a> article on the kern_control.h &amp; kern_event.h headers in /usr/include/sys/&#8230;  It seems to me &#8211; at a cursory glance &#8211; that these might provide a mechanism for receiving general I/O events, which could be picked apart to discover their nature, scope and relevant details.  Of course, they do seem somewhat orientated towards kexts or similar kernel-mode code, but there are ioctl hooks that look interesting&#8230; definitely something to research.</font></p>
<p><font>But the big epiphany was just realising that I can use the public Spotlight API in various hacky ways to do what I want&#8230; for example, a live query for all files modified in the last 60 seconds or so, which will update constantly with newly modified files&#8230; provided Spotlight queues </font><font face="Helvetica-Oblique"><i>all</i></font><font> prospective results, and doesn&#8217;t drop any due to time-outs or similar things.  Definitely worth investigation.</font></p>
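<p>Leaving the Spotlight query aside, the core of the polling idea can be sketched portably with plain <code>stat(2)</code>: treat any file whose modification time falls inside the current polling window as needing a fresh copy.  Everything below is illustrative &#8211; real code would get the candidate list from MDQuery (or the kernel event mechanism) rather than deciding file by file, and would preserve metadata when copying:</p>

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

/* Returns true if `path` was modified at or after `since` -- i.e. it
   fell inside the current polling window and should be re-mirrored. */
bool needs_backup(const char *path, time_t since) {
    struct stat st;
    if (stat(path, &st) != 0)
        return false;  /* vanished since we saw it; nothing to mirror */
    return st.st_mtime >= since;
}

/* Naive byte-for-byte mirror of `src` to `dst`. */
bool mirror_file(const char *src, const char *dst) {
    FILE *in = fopen(src, "rb");
    if (!in)
        return false;
    FILE *out = fopen(dst, "wb");
    if (!out) {
        fclose(in);
        return false;
    }

    char buf[4096];
    size_t n;
    bool ok = true;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0) {
        if (fwrite(buf, 1, n, out) != n) {
            ok = false;
            break;
        }
    }
    fclose(in);
    fclose(out);
    return ok;
}
```

<p>A driver would then loop every 60 seconds or so, asking for everything modified since the previous pass and mirroring it to the second location &#8211; essentially the hacky query described above, with Spotlight doing the filtering for you.</p>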
]]></content:encoded>
					
					<wfw:commentRss>https://wadetregaskis.com/real-time-backup/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1610</post-id>	</item>
	</channel>
</rss>
