Any of you mind sharing your IT backgrounds?

RexBowie

Well-known member
Apr 25, 2023
12,828
16,761
113
I would throw the same challenge back to you with a PowerShell script written by a programmer...no one would ever introduce a script into production without fully testing in a sandbox.

...no one that values their career at least.

What AI accomplishes is streamlining the development layer before it gets to test. How long would it take a programmer to write those thousand lines of code?

AI can do it in seconds.

So, in my position, none of this is a hypothetical scenario, to be clear. I am just sharing what I know from being part of various AI focus groups, working with talented engineers and ultimately speaking from the perspective of a technical stakeholder.

As stated, these are best viewed as productivity tools. I agree with you there. What I am trying to convey is that the nuances of the real world are a lot more complex than that.

You shouldn't just generate 1,000 lines of code, test it in a lower environment and walk away. That's not what a responsible person who wants to keep their job would do anyways. You need to know what's in the code, review it, analyze potential security exploits, ensure the execution routine has correct permissions in all environments, inspect the code for re-usability, assess the code's scalability, ensure the code is readable (by a human), assess the libraries used, ensure there are not wayward lines of code that simply shouldn't be there. I could list all kinds of things that are required to validate machine-written code.
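Part of that checklist can be scripted. A minimal sketch in Python of the "wayward lines" and security-exploit checks (the rules and messages here are made up for illustration; a real review would lean on tools like a proper static analyzer, not a few regexes):

```python
import re

# Hypothetical red-flag rules a reviewer of machine-written code might
# start with. These are illustrative only, not a complete audit.
RULES = [
    (r"\beval\(", "eval() call - possible code-injection risk"),
    (r"\bos\.system\(", "os.system() - unsanitized shell execution"),
    (r"(?i)password\s*=\s*['\"]", "hardcoded credential"),
]

def review(source: str) -> list[str]:
    """Scan source line by line and return human-readable findings."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if re.search(pattern, line):
                findings.append(f"line {lineno}: {message}")
    return findings

if __name__ == "__main__":
    snippet = 'password = "hunter2"\nresult = eval(user_input)\n'
    for finding in review(snippet):
        print(finding)
```

The point isn't that three regexes replace a reviewer; it's that the mechanical parts of the list (wayward lines, risky calls, embedded secrets) are exactly the parts a machine can pre-screen before a human reads the code.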

Of all those things, many could even be automated to some degree. Now, how do you know which things should be automated? What do you do when these automated tests fail? Do you trust the automation? What's needed to model the automation? How do you go about triaging inevitable defects? How do you recover if things go south in production? So many questions.
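Those triage questions can be modeled as a simple policy: ship on green, auto-retry only known-flaky failures, and escalate everything else to a human. A hedged sketch (the check names, the `flaky` flag, and the retry budget are all invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str
    passed: bool
    flaky: bool = False  # known-intermittent checks may be retried

def triage(results: list[CheckResult], max_retries: int = 1) -> str:
    """Decide what happens after automated checks finish.

    Returns "ship", "retry", or "escalate-to-human".
    """
    failures = [r for r in results if not r.passed]
    if not failures:
        return "ship"
    if all(f.flaky for f in failures) and max_retries > 0:
        return "retry"          # trust automation only for known flakes
    return "escalate-to-human"  # novel failures need a person
```

Which failures count as "known flaky" is itself a judgment call, which is the original point: automating the tests doesn't remove the question of when to trust them.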

Lastly, let's talk about the equilibrium of features and competition in light of AI. One might assume that productivity gains lead to decreased headcount. In the short term, that's possibly even true for many companies, but what productivity gains also lead to is a faster rate of features, more feature parity, higher competition, and pressure to innovate. There's less time spent on monotonous tasks and more time spent on innovative tasks and value drivers. Company B, meanwhile, has to do twice as much to keep up with Company A. Company C doesn't decrease headcount but multiplies output with performance gains. Now companies A and B need to catch C.

Through a similar lens, this may not be all that dissimilar to the industrial revolution. Jobs did not decrease during the industrial revolution. They changed, for sure, but ultimately employment increased despite automation. Ultimately, though, anyone who says they know for sure is lying, but this is where I stand.
 
Mar 19, 2006
1,069
653
63
You shouldn't just generate 1,000 lines of code, test it in a lower environment and walk away. That's not what a responsible person who wants to keep their job would do anyways. You need to know what's in the code, review it, analyze potential security exploits, ensure the execution routine has correct permissions in all environments, inspect the code for re-usability, assess the code's scalability, ensure the code is readable (by a human), assess the libraries used, ensure there are not wayward lines of code that simply shouldn't be there. I could list all kinds of things that are required to validate machine-written code.

I was speaking to productivity efficiency, which we have agreed is vastly superior with AI. You are now speaking (mostly) to quality control, which should not fall back on the plate of the initial programmer or AI.

Of all those things, many could even be automated to some degree. Now, how do you know which things should be automated? What do you do when these automated tests fail? Do you trust the automation? What's needed to model the automation? How do you go about triaging inevitable defects? How do you recover if things go south in production? So many questions.

Controls should always be in place to constantly monitor the tasks you mentioned. It should not matter if the reviewer is artificial or not. All of the concerns you raise would be the exact same issues stakeholders should have with their current evaluators. I would see a huge advantage to building an AI (separate from the programming AI, of course) that could accomplish all of these tasks. You keep referencing production, which should NEVER come into question in a true dev/test/prod development environment. I get it, stuff happens...but if it does as regularly as you appear to be referencing, the whole team should be scrapped.
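That dev/test/prod discipline can even be enforced mechanically rather than by convention. A hedged sketch of a promotion gate, where a build is only eligible for an environment after passing every stage below it (the pipeline names and the "passed" ledger are illustrative, not any particular CI product's API):

```python
# Illustrative promotion gate for a dev -> test -> prod pipeline.
# A build may only be deployed to an environment after it has
# passed every environment earlier in the pipeline.
PIPELINE = ["dev", "test", "prod"]

def can_promote(build_passed: set[str], target: str) -> bool:
    """True if `target` is reachable given the stages this build passed."""
    if target not in PIPELINE:
        raise ValueError(f"unknown environment: {target}")
    required = PIPELINE[:PIPELINE.index(target)]
    return all(env in build_passed for env in required)
```

Under a gate like this, production "coming into question" means someone bypassed the gate, which is an organizational failure rather than a tooling one.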

Lastly, let's talk about the equilibrium of features and competition in light of AI. One might assume that productivity gains lead to decreased headcount. In the short term, that's possibly even true for many companies, but what productivity gains also lead to is a faster rate of features, more feature parity, higher competition, and pressure to innovate. There's less time spent on monotonous tasks and more time spent on innovative tasks and value drivers. Company B, meanwhile, has to do twice as much to keep up with Company A. Company C doesn't decrease headcount but multiplies output with performance gains. Now companies A and B need to catch C.

This all makes a lot of sense in a vacuum, which AI performs exceptionally well in. The variability of human labor introduced to that vacuum makes it less so. Company C had better thread the needle on hiring or retention, or Company A is going to leave it in its wake. I would also add that innovation and value propositions are usually top-down initiatives. Developers are task-driven and respond to guidance. AI might not be there on the guidance piece yet, but I would not count it out in the next 5-10 years.

Through a similar lens, this may not be all that dissimilar to the industrial revolution. Jobs did not decrease during the industrial revolution. They changed, for sure, but ultimately employment increased despite automation. Ultimately, though, anyone who says they know for sure is lying, but this is where I stand.

I wholeheartedly agree. AI is terrifying if bad actors get involved (which they inevitably will). However, if we can put guard rails in place to minimize the exposure, this era will make the industrial revolution look like the invention of the bicycle. Our quality of life and financial position as a country will be orders of magnitude better. It is not lying to say the opportunity is there...but it is lying to claim to know how society will manage it.

Thank You, KAB. I have enjoyed the discussion.
 
  • Like
Reactions: d2atTech

NociHTTP

New member
Mar 8, 2023
9,963
15,846
0
No offense, but do the certifications mean as much anymore, for most roles? Isn't it the experience and technology you are currently working on?
I would think the certs demonstrate that you *know things*, far more than the average person who doesn't study for the certs.

As for me:

* took my first programming course (BASIC) in high school in 1990
* learned about a language called C++ in 1995
* took first programming courses in college (Visual Basic, C, C++, UNIX shell scripting) in 1998
* learned about AOL in 1998 and started playing around with Active Server Pages
* took more programming courses in college (HTML, CSS, JavaScript, Java) in 2002, along with courses in Systems Analysis & Design
* only ended up with an AS in Computer & Information Technology in 2003
* since 2004 I have occasionally played around with different Linux distros (BSD, SuSE, Redhat, Ubuntu, Debian and now Linux Mint)
* over the years I have collected lots of programming books, in C++, Java, Python, HTML, CSS, VueJS, PHP
* my current homelab is 2 computers running Linux Mint, and the components to build a third machine that will end up being a racing simulator. In the rack is a TP-Link 16port Gigabit switch and a Raspberry Pi 4 that I have two external hard drives connected to
* I'm likely THE WORST PROCRASTINATOR on RR, compounded by being a Type 2 Diabetic who often "crashes". It's just an ugly combination, and means I usually only have a small window of opportunity where I have the energy to sit and learn something or work on a project. My current project is making a web application using PHP and SQLite3, running on Apache.
* I do have some excellent books on Linux, Security, A+ and have been meaning to study for those certs, but at my age I'm unlikely to ever work in the IT field.
 
  • Like
Reactions: d2atTech

NociHTTP

New member
Mar 8, 2023
9,963
15,846
0
I would throw the same challenge back to you with a PowerShell script written by a programmer...no one would ever introduce a script into production without fully testing in a sandbox.

...no one that values their career at least.

What AI accomplishes is streamlining the development layer before it gets to test. How long would it take a programmer to write those thousand lines of code?

AI can do it in seconds.
But it's often over-complicated code, and you BETTER know what you're doing when trying to manipulate the code.
 

d2atTech

New member
Apr 15, 2009
3,477
2,578
0
I was speaking to productivity efficiency, which we have agreed is vastly superior with AI. You are now speaking (mostly) to quality control, which should not fall back on the plate of the initial programmer or AI.



Controls should always be in place to constantly monitor the tasks you mentioned. It should not matter if the reviewer is artificial or not. All of the concerns you raise would be the exact same issues stakeholders should have with their current evaluators. I would see a huge advantage to building an AI (separate from the programming AI, of course) that could accomplish all of these tasks. You keep referencing production, which should NEVER come into question in a true dev/test/prod development environment. I get it, stuff happens...but if it does as regularly as you appear to be referencing, the whole team should be scrapped.



This all makes a lot of sense in a vacuum, which AI performs exceptionally well in. The variability of human labor introduced to that vacuum makes it less so. Company C had better thread the needle on hiring or retention, or Company A is going to leave it in its wake. I would also add that innovation and value propositions are usually top-down initiatives. Developers are task-driven and respond to guidance. AI might not be there on the guidance piece yet, but I would not count it out in the next 5-10 years.



I wholeheartedly agree. AI is terrifying if bad actors get involved (which they inevitably will). However, if we can put guard rails in place to minimize the exposure, this era will make the industrial revolution look like the invention of the bicycle. Our quality of life and financial position as a country will be orders of magnitude better. It is not lying to say the opportunity is there...but it is lying to claim to know how society will manage it.

Thank You, KAB. I have enjoyed the discussion.
Agreed with a lot of this.
 

d2atTech

New member
Apr 15, 2009
3,477
2,578
0
So, in my position, none of this is a hypothetical scenario, to be clear. I am just sharing what I know from being part of various AI focus groups, working with talented engineers and ultimately speaking from the perspective of a technical stakeholder.

As stated, these are best viewed as productivity tools. I agree with you there. What I am trying to convey is that the nuances of the real world are a lot more complex than that.

You shouldn't just generate 1,000 lines of code, test it in a lower environment and walk away. That's not what a responsible person who wants to keep their job would do anyways. You need to know what's in the code, review it, analyze potential security exploits, ensure the execution routine has correct permissions in all environments, inspect the code for re-usability, assess the code's scalability, ensure the code is readable (by a human), assess the libraries used, ensure there are not wayward lines of code that simply shouldn't be there. I could list all kinds of things that are required to validate machine-written code.

Of all those things, many could even be automated to some degree. Now, how do you know which things should be automated? What do you do when these automated tests fail? Do you trust the automation? What's needed to model the automation? How do you go about triaging inevitable defects? How do you recover if things go south in production? So many questions.

Lastly, let's talk about the equilibrium of features and competition in light of AI. One might assume that productivity gains lead to decreased headcount. In the short term, that's possibly even true for many companies, but what productivity gains also lead to is a faster rate of features, more feature parity, higher competition, and pressure to innovate. There's less time spent on monotonous tasks and more time spent on innovative tasks and value drivers. Company B, meanwhile, has to do twice as much to keep up with Company A. Company C doesn't decrease headcount but multiplies output with performance gains. Now companies A and B need to catch C.

Through a similar lens, this may not be all that dissimilar to the industrial revolution. Jobs did not decrease during the industrial revolution. They changed, for sure, but ultimately employment increased despite automation. Ultimately, though, anyone who says they know for sure is lying, but this is where I stand.

Man, this is one of the best posts on any platform I've ever read. Thank you so much for writing it.

Out of curiosity, are you linked up with the folks at Google, OpenAI, or Anthropic? By the way you write, I just assumed you were.
 
  • Like
Reactions: RexBowie

RexBowie

Well-known member
Apr 25, 2023
12,828
16,761
113
Man, this is one of the best posts on any platform I've ever read. Thank you so much for writing it.

Out of curiosity, are you linked up with the folks at Google, OpenAI, or Anthropic? By the way you write, I just assumed you were.

Not directly. Have talked with some of them (as a 3rd party), but my involvement with that is more in the area of dev AI code generation and tooling that helps with that. There's kind of two main AI buckets when it comes to SW engineering right now. You have the area where your product is implementing AI models, agents, etc. (and the skillsets that requires from devs) then you have the area where you're using AI to enhance productivity. Both are super important to track right now if you're in SW engineering.
 

RexBowie

Well-known member
Apr 25, 2023
12,828
16,761
113
I was speaking to productivity efficiency, which we have agreed is vastly superior with AI. You are now speaking (mostly) to quality control, which should not fall back on the plate of the initial programmer or AI.



Controls should always be in place to constantly monitor the tasks you mentioned. It should not matter if the reviewer is artificial or not. All of the concerns you raise would be the exact same issues stakeholders should have with their current evaluators. I would see a huge advantage to building an AI (separate from the programming AI, of course) that could accomplish all of these tasks. You keep referencing production, which should NEVER come into question in a true dev/test/prod development environment. I get it, stuff happens...but if it does as regularly as you appear to be referencing, the whole team should be scrapped.



This all makes a lot of sense in a vacuum, which AI performs exceptionally well in. The variability of human labor introduced to that vacuum makes it less so. Company C had better thread the needle on hiring or retention, or Company A is going to leave it in its wake. I would also add that innovation and value propositions are usually top-down initiatives. Developers are task-driven and respond to guidance. AI might not be there on the guidance piece yet, but I would not count it out in the next 5-10 years.



I wholeheartedly agree. AI is terrifying if bad actors get involved (which they inevitably will). However, if we can put guard rails in place to minimize the exposure, this era will make the industrial revolution look like the invention of the bicycle. Our quality of life and financial position as a country will be orders of magnitude better. It is not lying to say the opportunity is there...but it is lying to claim to know how society will manage it.

Thank You, KAB. I have enjoyed the discussion.


You're triggering me on 2 points. Don't necessarily agree with the rest.

Your first comment stands in the way of everything agile. Quality control should not just be shipped to QA without proper developer implementation. That's a smell of a bad SDLC and bound to create a higher rate of bottlenecks. Automated testing should always fall first on the shoulders of engineers. This is even more true in an AI paradigm. I'd even argue that QA is put more at risk than any other sub-category of software development. If devs complete items earlier, it gives them more time to introduce their own automated test suites (which in actuality agile / XP orgs have been doing for years already).
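Concretely, the XP practice being described is the developer shipping tests alongside the code before QA ever sees it. A small sketch of what that looks like in Python (the `slugify` function is a hypothetical stand-in for something an AI assistant might generate; the tests are the part the engineer owns):

```python
# A tiny developer-owned test suite, in the XP spirit described above.
# `slugify` is a made-up example of AI-generated code; the tests below
# are what the engineer writes before handing the change to QA.

def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    cleaned = "".join(c if c.isalnum() or c.isspace() else " " for c in title)
    return "-".join(word.lower() for word in cleaned.split())

def test_basic():
    assert slugify("Hello World") == "hello-world"

def test_punctuation_stripped():
    assert slugify("AI: Friend or Foe?") == "ai-friend-or-foe"

def test_empty():
    assert slugify("") == ""

if __name__ == "__main__":
    test_basic()
    test_punctuation_stripped()
    test_empty()
    print("all tests passed")
```

Whether the implementation came from a human or a model, the tests encode what the developer actually intended, which is exactly the knowledge that can't be "shipped to QA."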

I understand your point in terms of environments, but you simply can't ignore production. Ultimately, code is worthless unless it's delivered. The reason I mention production is because ultimately how you impact the end-user is paramount. You fail there, you lose money. It's not a trivial thing. As long as there is uncertainty that machine written code is accurate, scalable, secure, etc. there is always a need for a human to drive it & validate it.
 

d2atTech

New member
Apr 15, 2009
3,477
2,578
0
Not directly. Have talked with some of them (as a 3rd party), but my involvement with that is more in the area of dev AI code generation and tooling that helps with that. There's kind of two main AI buckets when it comes to SW engineering right now. You have the area where your product is implementing AI models, agents, etc. (and the skillsets that requires from devs) then you have the area where you're using AI to enhance productivity. Both are super important to track right now if you're in SW engineering.
I should connect you if you are interested. I think the folks ought to be talking with people like you anyway.
 
  • Like
Reactions: RexBowie