Guidelines Must be Updated

Quick context for those unfamiliar with the case that triggered this discussion, please read: [Announcement] rplidar_ros2_driver: A Modern C++ (C++17) Driver with Lifecycle Support.

I’m glad this discussion has started and that it isn’t continuing in the original thread, because it was starting to derail the topic announcing a new driver. Maybe the mods could even move a few posts here.

However, I’m flagging your latest two posts as inappropriate, @Orhan. In my view, you’re doing exactly the thing you’re reporting: ad hominem attacks. “Big guy”, “bad guy”, “good guy”, “bully”, “academician”, “hobbyist YouTuber”, etc. all sound like name-calling or ad hominem attacks to me. Please stop. Otherwise, it will derail and destroy this potentially fruitful discussion.

I’m also quite sad you used an LLM to form and support your arguments (although I’m very happy you disclosed its usage). LLMs are very good at telling you what you want to hear regardless of truth or facts, and I think your usage here led to exactly that.

My post did not attack the original author’s personality. I was commenting on his particular behavior, which is definitely not an ad hominem attack. An ad hominem attack would be “you’re a dishonest person” or “you’re dishonest” (without any further qualification like “regarding this or that”). My saying “you maybe don’t even understand your codebase” was definitely not about the competence of the author (which I don’t know); it was pointing out that the code is probably largely generated, so the author does not need to understand it in order to “write” it. I hope this can be clearly understood by anyone reading the whole thread. If not, maybe there’s a communication problem.

Without any “proof”, this statement could be seen as an attack, but given that there was an obvious and verifiable discrepancy between what the author announced and what was in the repo, I still think my statement was acceptable (at the time of writing). I agree it could have been more welcoming, e.g. by pointing out the particular steps the author could take, but I try not to get caught up in the LLM flood. If I see LLM output, I don’t want to spend too much time (genuinely) reacting to it, because generating the thing took far less time than my thinking and writing. That’s why my post was not very welcoming.

As anyone can see, the original author took on board the actionable feedback in the reactions, added notices to the repo about LLM usage, and explained that the discrepancy I used as “proof” was actually a mistake. Great, mistakes happen, problem explained!

I’m not going to edit my post unless more people convince me it was already over the line (currently, I see it as borderline, but above the bar). However, adding a note pointing out that things have moved on since I wrote it is a good idea. I’ll add an amendment to the post as soon as I finish replying here.

Regarding the post of @martincerven, I think it is even closer to the borderline than mine. Whether it falls above or below the bar depends a lot on how each individual understands “AI slop”. I, for example, don’t read it as an insult to the author, but as an insult aimed at how (badly) LLMs work at this time in some cases. But other people may feel differently about this not-yet-well-established term. Still, because of this “AI slop” problem, authors should check the LLM outputs and own responsibility for the generated (and corrected) output. What gets this post above the bar is the list of particular things that Martin finds confusing. What falls below the bar for me is the last sentence: “Now OP will say he used GPT (to make 20 page of slop blog right?) because he’s not native speaker.”

That sentence is totally unacceptable and a complete misunderstanding of the place where we are (ROS Discourse). I can’t understand why you wrote it. I don’t do anything “for OSRA” (except paying for the cheapest membership). I do all my related work for the ROS community: for myself, for my team and colleagues, for other fellow roboticists and researchers, and for great companies with great ideas. OSRA is an entity that makes all this effort possible and worth it.
