Programming

Generate a React component table for users. Put these columns and filters on it, and these are the permissions required for users for xyz. When you click this column, a modal comes up with this form with these columns. Etc., etc. Boom, you just saved 2.5 hours, and it only takes 15 minutes to look over the bot's code.

That sounds more like days of coding unless you’ve already done something very similar or there’s a package that does all of that for you.


Linting is only one part of code review, and obviously it's not what I'm referring to.

I was not trying to point out that you were wrong or be critical. In fact, for the most part I was agreeing. I was just saying that, in my opinion, linting is indeed AI code review, and it does it very well.

For reference, I have been reviewing code for 7 years, and for 5 of those it's been an important duty. The amount of actual review that I do today is much, much smaller than just 5 years ago, because the tools are very good, imo. They of course aren't perfect.

Congratulations.

The discussion is around whether ChatGPT can start doing the coding. It seems like some have convinced themselves that it can, whereas linters are nowhere near that milestone. I am saying we are at the stage where a specialized GPT is becoming useful to help code, but it cannot yet review, and hence it probably falls short of writing code.

I was not bragging. You cut off the point that I was making. It was simply to say that I have seen a drastic decrease over a fairly short amount of time in how much effort is necessary to review code, because there are tools that automate it.

What you are saying here is what I said. I am confused by the contentious disagreement.

I will refresh the conversation for you and even bold the superseding ideas.

*hot should be "how". Just for the record, this is what we call a typo.

And the 2nd *that should be "than". I just don't want you to get confused. I will refrain from spelling out the internet colloquialisms unless you indicate it is necessary.

Oh, I got the point you were making. It just didn't really apply to the GPT discussion.

Agreed. One key point is that there’s a huge difference between a function you’re 95% confident in and one that you’re 99.99% confident in. It’s amazing if AI can solve a moderately complex programming task correctly 95% of the time, but for real engineering tasks, that’s not useful. The power of software is that if you can come up with a great solution to a problem, you can then ~costlessly use that solution for every instance of the problem. But that leads you to situations where even simple programs have dependency trees with hundreds of leaves. If all those leaves have a 5% failure rate, then your program won’t work.
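
A back-of-the-envelope sketch of that compounding effect (the 95%/99.99% figures and the piece counts are just illustrative numbers from the post above, not measurements):

# Chance that a whole program works if it is built from n independently
# generated pieces, each correct with probability p.
for p in (0.95, 0.9999):
    for n in (10, 100, 500):
        print(f"p={p}: {n:>3} pieces -> {p ** n:.2%} chance everything works")

At 95% per piece, a hundred pieces all work together well under 1% of the time; at 99.99% per piece they still work together about 99% of the time.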

That said, it does seem like AI pair programming is going to be a huge use case. It's a great demo to tell Copilot to write a function that does X and have it magic up a solution. But in real life, it would be amazing to have a Clippy avatar on the screen, and whenever I named a variable badly, it would wander over and start looking skeptically at the assignment. Saving keystrokes is not that valuable compared to correctness and maintainability, so a tool that identifies potential issues is valuable, even if it's wrong half the time. It just needs a good UX.

Hey kids. Anyone know how to pass arguments to a python script but also be able to redirect the output to a file? So something like this would work:

$ python3 find_usage_gaps.py ABC_cust_accounts.txt > ABC_12_10_usage_gaps.txt

My Google-fu hasn't been successful. I should just make it use an output file, but I'm lazy and not really getting paid for this…

print() writes to stdout so it should work. I didn’t test with args, but that shouldn’t change anything.

$ echo "print('testing some output')" >  output.py
$ cat output.py 
print('testing some output')
$ python3 output.py
testing some output
$ python3 output.py > test.txt
$ cat test.txt 
testing some output


If I run

$ python3 find_usage_gaps.py > ABC_12_11_usage_gaps.txt 

The output does get redirected to the file. I added arguments so I wouldn't be hardcoding input filenames into the script each run, and now when the script runs, the stdout isn't directed anywhere.

I’m not exactly clear on the requirement, but this works as expected. Command line arguments are available inside the script and output is redirected to a file. As freddbird said, the arguments don’t really change anything.

$ cat find_usage_gaps.py 
import sys
print('Argument List:', str(sys.argv))

$ python3 find_usage_gaps.py 
Argument List: ['find_usage_gaps.py']

$ python3 find_usage_gaps.py foo bar
Argument List: ['find_usage_gaps.py', 'foo', 'bar']

$ python3 find_usage_gaps.py foo bar > test.txt

$ cat test.txt 
Argument List: ['find_usage_gaps.py', 'foo', 'bar']

That makes me think that your output is being printed to standard error, which you can redirect using 2> error.txt
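
For anyone following along, a minimal sketch of the difference (stderr_demo.py and the filenames are made up for illustration):

$ cat stderr_demo.py
import sys
print('this goes to stdout')                   # captured by  > out.txt
print('this goes to stderr', file=sys.stderr)  # captured by 2> err.txt

$ python3 stderr_demo.py > out.txt 2> err.txt
$ cat out.txt
this goes to stdout
$ cat err.txt
this goes to stderr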


Haven’t had much time till now to look at this; since then my laptop got replaced, which probably doesn’t matter but is always fun. I started using PyCharm with the laptop refresh, and stepping through the code it looked like I wasn’t handling the number of parameters correctly, so I fixed that and it looks like it works. But the number of parameters was including the ‘>’ and the target file, which I can’t reproduce now, so I don’t know why it’s working… Oh well. It also doesn’t look like “script parm > target.txt” is a very common idiom in Python; I should just change it to “script input_parm output_parm” and be done with it.
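
If it helps, here is a minimal sketch of that “script input_parm output_parm” shape using argparse (the argument names and the pass-through loop are placeholders, not your actual gap-finding logic):

# find_usage_gaps.py -- sketch only; the real gap-finding logic goes in the loop
import argparse

parser = argparse.ArgumentParser(description='Find usage gaps in an accounts file.')
parser.add_argument('input_file', help='e.g. ABC_cust_accounts.txt')
parser.add_argument('output_file', help='e.g. ABC_12_10_usage_gaps.txt')
args = parser.parse_args()

with open(args.input_file) as src, open(args.output_file, 'w') as dst:
    for line in src:
        # placeholder: replace with the real gap analysis
        dst.write(line)

Run it as: python3 find_usage_gaps.py ABC_cust_accounts.txt ABC_12_10_usage_gaps.txt. Also, the '>' in the earlier version is consumed by the shell before Python even starts, so it should never show up in sys.argv unless it was quoted.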

I’ve just been contacted by a former company to see if I’d be willing to do some contract work on a web-app we built in 2001. Still in production, still very popular with the users. JSP and a shit-ton of document.write() because frameworks didn’t exist. We tested it in IE 4-5.5 and Netscape 4.31!

I feel like one of those old COBOL programmers pulled out of retirement for Y2K.

Also apparently users don’t abandon your app if you don’t update your button styles to match Apple every few years. Who knew?

Hmmm

:vince2:

:vince:

Some truth in this at mega-companies.


There is some truth to it, but c’mon, “only worked 3 hours the entire year” and the insinuation that this is what was happening at Meta/Google/Twitter seem quite unlikely.

I meant mega non-FAANGs.

But at least from Hacker News I get the impression there was some of this going on at FAANGs too. Make-work, basically.