Want to derail any serious discussion of programming language tools or techniques? Ask "Yeah, but does it scale?"
Sure, it's not science. It's alchemy and astrology, but you can demonstrate your world-weary superiority. Better yet, you can distract people from getting real things done.
Sometime between when I learned to program (when we counted processor speeds in megahertz and fractions thereof) and today, the question flipped. Back in the day, when BASIC didn't have a SLEEP keyword and cooperative multitasking still made sense (invoke a callback to your own joke here, but please don't block other people from moving to the next sentence), you'd insert a do-nothing counting loop to delay things, because even then computers were faster than human beings. We counted cycles.
Maybe we could have solved bigger problems if we were more clever, but we spent our time trying to cram as much program as possible into as few clock cycles as possible. If that meant rewriting a loop in assembly to get the memory footprint down and to take advantage of internal details of the processor we'd read about in one of the copious manuals, we'd do it.
Features were important, but the rule of the day seemed to be to use limited resources to the fullest. If that meant skipping a built-in feature because you wanted to unmap the memory it took and use it for something else, that's what you did. If you could save a few bytes by taking advantage of a built-in timer instead of writing your own, you let the screen refresh rate dictate what happened.
I don't lament that loss. (I liked the challenges, but there are always challenges.) I do find the switch fascinating, though. Perhaps the switch flipped in me rather than in the world: I'm not writing silly little games or demos anymore; I'm writing programs that are supposed to help real users manage their information and be more productive.
(Then again, I did learn to program by the osmosis of typing a lot of code, changing it, and eventually learning what worked and what didn't. As above, so below.)
The programs I write now care more about dealing with lots of data than they do about fitting into limited computing resources. (Sometimes resource limits are still important: I've had to change algorithms more than once to make the working set of at least one project fit in available memory.) In fact, the resources I have at my disposal are so embarrassingly large compared to thirty years ago that I can waste a lot of processor time and memory to avoid waiting on things like the speed-of-light latency of accessing remote resources.
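The trade I mean is roughly the one below: spend memory on a local cache so you pay the network round trip once instead of every time. A minimal sketch, assuming a hypothetical remote service at a placeholder URL:

```python
from functools import lru_cache
from urllib.request import urlopen


@lru_cache(maxsize=10_000)  # spend memory freely to avoid repeated round trips
def fetch_record(record_id):
    """Fetch a record from a remote service (the URL is a placeholder).

    The first call for a given id pays the speed-of-light latency; every
    later call is answered from local memory, trading RAM for waiting.
    """
    with urlopen(f"https://example.com/records/{record_id}") as response:
        return response.read()
```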
I didn't see that coming.
This all comes to mind when I see discussions of programming languages, techniques, and tools. The pervasive criticism, flung with the intent to sting, is often "But does it scale to large projects?"
... as if the skills needed to manage a project deployed to an 8-bit microcontroller with 32 KB of RAM were so similar to those needed for a CRUD application running in a web browser and used by at most 35 people within a 500-person company? (As if other skills are so different!)
Put another way, I don't care if you can't figure out how to make (for the sake of argument) agile development with pervasive refactoring, coding standards, and a relentless focus on simplicity work with a team of 80 programmers distributed across four time zones and six teams.
I don't care if you think Java or PHP is the only language in which you can hire enough warm bodies to fill your open programming reqs because you think the problem is so large you have to throw more people at it.
I don't care if you think PostgreSQL is inappropriate because it's a relational database and relational databases are slower than NoSQL when you have to scale to 50 million hits during the Olympics, while I'm profitable with a few orders of magnitude fewer users.
Your large isn't my large isn't everyone's large, and the way you scale isn't the way I scale isn't the way everyone scales.
You're not doing science. You're not measuring what works and what doesn't. You're not accounting for control variables (could you even list all of the control variables necessary to produce a valid, reproducible experiment related to software development tools and techniques?).
Conventional wisdom says "Don't optimize until after you profile, find a valid target to optimize, and have a coherent way to measure the effects of your optimizations." Is it too much to ask to come up with ways to measure the ineffable second-order artifacts of software development, like bug likelihood, user satisfaction, safety, reliability, and maintainability, so we can measure the effects of things like static typing, automated refactoring tools, the presence or absence of higher-order programming techniques, and incremental design?
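For first-order performance we already have the tooling to do exactly that; the sketch below (Python's standard cProfile and pstats, profiling a deliberately naive function of my own invention) is what "profile before you optimize" looks like in practice:

```python
import cProfile
import pstats


def slow_sum(values):
    """A deliberately naive target: a pointless string round trip per element."""
    total = 0
    for v in values:
        total += int(str(v))  # the kind of waste a profiler would surface
    return total


# Measure first; only then decide whether slow_sum deserves optimizing at all.
cProfile.run("slow_sum(list(range(100_000)))", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(5)
```

We have nothing comparable for the second-order questions, which is the point.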
Otherwise we're stuck in a world of alchemy, before the natural philosophers clawed their way to the point where a unified theory of energy, matter, motion, and interaction made any sense. Maybe someday soon the smartest person in the room will answer the question "How does this work?" with "Let's try to find out!" rather than donning a wizard's robe and hat and waving some sort of mystical wand about wildly.