goreporter | Golang tool that does static analysis | Code Analyzer library
kandi X-RAY | goreporter Summary
goreporter Key Features
goreporter Examples and Code Snippets
Trending Discussions on Code Analyzer
QUESTION
First of all, I am a total beginner in Rust. I started to use a code analyzer (Mega-Linter) and it made me realize how much I duplicated the same "use" statements in my submodules. Here is what my source file tree looks like:
src/
- lib.rs
- ui/
  - mod.rs
  - ui_mod_1.rs
  - ui_mod_2.rs
Then I realized that my ui_mod_1.rs and ui_mod_2.rs had almost the same bunch of "use" statements:
// ui_mod_1.rs
use tui::{
    layout::{Alignment, Constraint, Direction, Layout, Rect},
    style::{Color, Modifier, Style},
};
// rest of the file
// ui_mod_2.rs
use tui::{
    layout::{Alignment, Constraint, Direction, Layout, Rect},
    style::{Color, Modifier, Style},
};
// rest of the file
// mod.rs
pub mod ui_mod_1;
pub mod ui_mod_2;
// lib.rs
pub mod ui;
The idea behind ui_mod_1.rs and ui_mod_2.rs is to split the "ui utility" functions by theme, to avoid having a huge source file containing all of them. A possible solution is to merge the two files, but this is not what I want to do.
What I tried is to move the "use" statements that the two submodules have in common into mod.rs, or even into lib.rs, like so:
// mod.rs
pub use tui::{
    layout::{Alignment, Constraint, Direction, Layout, Rect},
    style::{Color, Modifier, Style},
};
pub mod ui_mod_1;
pub mod ui_mod_2;
But this does not work. After some research, I still have not found how to do this. Is there an elegant way to group "use" statements for all submodules?
ANSWER
Answered 2022-Mar-23 at 13:43
You can create a ui_prelude module that contains the use statements as pub use, and then just do use ui_prelude::* in your modules:
// ui_prelude.rs
pub use tui::{
    layout::{Alignment, Constraint, Direction, Layout, Rect},
    style::{Color, Modifier, Style},
};
// ui_mod_1.rs and ui_mod_2.rs
use super::ui_prelude::*;
// mod.rs
mod ui_prelude;
pub mod ui_mod_1;
pub mod ui_mod_2;
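With the prelude in place, each submodule gets the shared imports from a single use line. A minimal sketch of a submodule using it (the helper function below is hypothetical, and it assumes a recent tui release where Style has add_modifier):
// ui_mod_1.rs
use super::ui_prelude::*;

// Hypothetical helper, only here to show the prelude items resolving.
pub fn highlighted_style() -> Style {
    Style::default().fg(Color::Cyan).add_modifier(Modifier::BOLD)
}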
QUESTION
I've got a Roslyn-based code analyzer and code fix. When the ReportDiagnostic was created directly from an AnalyzerCodeBlock, the diagnostics would show up in live analysis (Problems in JetBrains Rider).
However, the analyzer needs to parse additional data from the solution and build a dependency tree to make its decision. So now it works like this:
RegisterCompilationStartAction -> then it registers a RegisterCodeBlockStartAction to build a dependency tree
RegisterOperationAction -> Instead of generating the ReportDiagnostic directly, it puts the particular calls into a ConcurrentBag to analyze later.
RegisterCompilationEndAction -> When called, this analyzes the calls from RegisterOperationAction with the dependency tree generated in the RegisterCodeBlockStartAction and generates ReportDiagnostics with the combined information.
Now it only works on build, not in live analysis. I would love to get this working in live analysis again (I have solution-wide analysis enabled), since the code fixes are incredibly useful.
Is there a known reason (like using any CompilationStart/End action) why this automatically doesn't work in live mode, or is there a way to refactor this into a different structure that is compatible with live analysis?
ANSWER
Answered 2022-Mar-18 at 12:46
CompilationStart isn't a problem; it doesn't cause an analyzer to be build-only. CompilationEnd actions, however, are the problem: they are build-only, and their associated code fixes won't show in the IDE. This is for performance reasons.
Related discussion: https://github.com/dotnet/roslyn/issues/51653
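One way to restructure, sketched roughly below (this is not the asker's analyzer: the descriptor, IDs, and the choice to report on every invocation are placeholders), is to report from the operation action itself, using whatever per-compilation state is already available, instead of deferring everything to a CompilationEndAction:
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public sealed class LiveFriendlyAnalyzer : DiagnosticAnalyzer
{
    // Hypothetical descriptor, for illustration only.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        "DEMO001", "Example rule", "Example message", "Usage",
        DiagnosticSeverity.Warning, isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();

        context.RegisterCompilationStartAction(startContext =>
        {
            // Per-compilation state (e.g. the dependency tree) can be built here
            // and captured by the lambda below.
            startContext.RegisterOperationAction(opContext =>
            {
                // Report from the operation action itself rather than queueing the
                // call for a CompilationEndAction: end actions make the analyzer
                // build-only, so its diagnostics and code fixes drop out of live analysis.
                opContext.ReportDiagnostic(
                    Diagnostic.Create(Rule, opContext.Operation.Syntax.GetLocation()));
            }, OperationKind.Invocation);
        });
    }
}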
QUESTION
I'm trying to build a code analyzer app. I have a txt file that contains Python code, and my goal is to save all the functions from this file in a dictionary in the class, but I don't have any idea how to do it.
First, I created a class:
class CodeAnalyzer:
    def __init__(self, file):
        self.file = file
        self.file_string = ""
        self.file_func = {}
        self.errors = {}
I want to save the functions in self.file_func = {}.
This is the processing step; every method should return a key and value that get added to the attributes:
def process_file(self):
    for i, line in enumerate(self.file):
        self.file_string += line
        self.check_divide_by_zero(i, line)
        self.check_parameters_num(i, line)
This is what I tried to do, but it failed:
def store_function(self, i, line):
    if(line.startswith('def')):
        self.file_func.setdefault(i,[]).append((self.file_string[file_string.index(':') ,:]))
Does anyone have an idea or any help with this?
ANSWER
Answered 2022-Mar-07 at 01:31
You can just use exec() with its globals dict set to a namespace dictionary on your class instance.
class CodeAnalyzer:
    def __init__(self, file):
        # Read a string from the file
        with open(file) as f:
            t = f.read()
        # Populate the namespace dictionary by executing the code in the file.
        # All functions, variables and classes declared in the file end up in this dict
        # (exec() also inserts a '__builtins__' entry, which will land in self.variables).
        self.namespace = {}
        exec(t, self.namespace)
        # Filter the namespace dict based on its contents.
        # type(lambda: 0) is just FunctionType
        self.functions = {k: v for k, v in self.namespace.items() if isinstance(v, type(lambda: 0))}
        # all classes are instances of type
        self.classes = {k: v for k, v in self.namespace.items() if isinstance(v, type)}
        # everything else, using dictionary merge (Python 3.9+)
        self.variables = {k: v for k, v in self.namespace.items() if k not in self.functions | self.classes}
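A quick usage sketch (the file name example.py is made up):
analyzer = CodeAnalyzer("example.py")
print(analyzer.functions)   # e.g. {'main': <function main at 0x...>}
print(analyzer.classes)
print(analyzer.variables)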
Feel free to comment on this answer if you have further questions.
QUESTION
I'm sure that question has been asked numerous times but I can't seem to find a good/satisfying answer so please bear with me.
Using PHP 7.4+, I tend to type everything I can. But I have some problems with Doctrine entity properties.
If I type everything correctly, I usually get a lot of errors like this one.
Typed property App\Entity\User::$createdAt must not be accessed before initialization
A code sample for that type of error would look something like this
/**
* @var DateTimeInterface
* @ORM\Column(type="datetime")
*/
protected DateTimeInterface $createdAt;
So, I used to make the property nullable even though the database field is not. So it would look something like this.
/**
* @var DateTimeInterface|null
* @ORM\Column(type="datetime")
*/
protected ?DateTimeInterface $createdAt = null;
But now I have another problem. I decided to implement a static code analyzer in my project, and I'm now using PHPStan. So when I scan my code, I get errors like this one:
Line src/Entity/Trait/TimestampableEntityPropertiesTrait.php (in context of class App\Entity\Article)
16 Property App\Entity\Article::$createdAt type mapping mismatch: property can contain DateTimeInterface|null but database expects DateTimeInterface.
So, what would be the right way to handle this type of situation?
Any advice would be greatly appreciated.
EDIT
I should have mentioned that sometimes, I don't want to/can't initialize the property in the constructor since I don't have the correct values just yet.
ANSWER
Answered 2022-Feb-09 at 14:23
I'm not sure if this is bad practice, but it turned out I only had to remove that check from the PHPStan configuration.
# phpstan.neon
parameters:
    doctrine:
        allowNullablePropertyForRequiredField: true
EDIT:
After some digging, I realized I should be using a DTO which would allow a null value, and then transfer it to my entity once ready (and valid). This way, my entity is always valid and I do not risk flushing some invalid data in the DB.
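A rough sketch of that DTO approach; the ArticleInput class, the Article entity, and its setCreatedAt() method are made-up names for illustration:
<?php

// The DTO tolerates nulls while input is being collected and validated.
final class ArticleInput
{
    public ?DateTimeInterface $createdAt = null;
}

// Only after validation is the (non-nullable) entity hydrated.
function toEntity(ArticleInput $input): Article
{
    $article = new Article();
    $article->setCreatedAt($input->createdAt ?? new DateTimeImmutable());

    return $article;
}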
QUESTION
In developing a Microsoft Word Online add-in, my team needs to detect focus being gained/regained by the document (ETA: to trigger other functionality which depends on this knowledge). It appears that Microsoft has tightly locked down scriptability in this context--all window.on* functions are replaced by null, all error-handling code is deeply obfuscated, etc. Our efforts so far have been frustrated.
Simply setting window.onfocus to a new function causes the add-in to not load correctly, likely because it's triggering a code analyzer as unsafe, but hard to tell.
There is also nothing in the Microsoft Word Online JavaScript API which directly provides this functionality. Scripts can detect when the document selection has changed easily with a provided method, but that seems to be about it for documented functionality in this area. (Obviously simply sensing document changes will not work.)
What's the best approach to sensing document and/or window focus in this situation? Thank you.
ANSWER
Answered 2021-Nov-30 at 10:26
The document.onvisibilitychange event can be used as a rough approximation of the required functionality.
document.onvisibilitychange = (ev) => {
  if (document.visibilityState == "visible") {
    // Handle pseudo-focus event
  }
  else {
    // Handle pseudo-blur event
  }
};
This may be combined as desired with the Office Online API DocumentSelectionChanged event to refine it further and sense when the cursor is placed within the Word document. (That is, fire focus-gained logic only when the Office DocumentSelectionChanged event fires for the first time after the browser's document.onvisibilitychange event has fired with document.visibilityState equal to "visible".)
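A rough sketch of that combination (the handler names are made up, and it assumes Office.js has already initialized):
let focusPending = false;

document.onvisibilitychange = () => {
  if (document.visibilityState === "visible") {
    // Don't fire focus-gained logic yet; wait for the next selection change.
    focusPending = true;
  } else {
    handlePseudoBlur();
  }
};

Office.context.document.addHandlerAsync(
  Office.EventType.DocumentSelectionChanged,
  () => {
    if (focusPending) {
      focusPending = false;
      handlePseudoFocus();  // the cursor is in the Word document again
    }
  }
);

function handlePseudoFocus() { /* focus-gained logic */ }
function handlePseudoBlur() { /* focus-lost logic */ }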
QUESTION
Newbie question, I've just switched from Visual Studio to Rider, so I'm still trying to get my bearings.
I'm trying to use the code analyzers and see the suggestions for the entire solution.
The errors/warnings I can see in the 'Errors In Solution' window, but the suggestions are not listed there. Can I add them to that list somehow, or is there a different window?
Edit: It's not just the Roslyn analyzers; for example, a spelling mistake shows up highlighted in the source as a 'suggestion'.
When opening the 'Errors in Solution' window I would have expected those to also be there, but they aren't.
ANSWER
Answered 2021-Nov-08 at 11:42
QUESTION
I'm writing a code analyzer. My analyzer uses Webpack's JavaScriptParser hooks. I need to output an error message, but the line number from node.loc is off because a loader has transformed the source code. So I want to feed the error message through a source map before logging it.
class FooPlugin {
  apply(compiler) {
    compiler.hooks.normalModuleFactory.tap("FooPlugin", factory => {
      factory.hooks.parser
        .for('javascript/auto')
        .tap("FooPlugin", parser => {
          parser.hooks.call.for("foo").tap("FooPlugin", expr => {
            const map = getSourceMapSomehow(); /* ??? */
            const originalLine = map.originalPositionFor(expr.loc.start).line;
            console.log("foo() call found at line " + originalLine);
          });
        });
    });
  }
}
I can't figure out how to fill in getSourceMapSomehow() in the example above. How can I get the source map for the current module inside a JavaScriptParser hook?
ANSWER
Answered 2021-Oct-29 at 15:06
I figured it out by reading the Webpack source code. The function I needed was module.originalSource().
// SourceMapConsumer comes from the "source-map" package
// (this assumes its synchronous 0.6.x API).
const { SourceMapConsumer } = require("source-map");

const map = new SourceMapConsumer(parser.state.module.originalSource().map());
const originalLine = map.originalPositionFor(expr.loc.start).line;
console.log("foo() call found at line " + originalLine);
QUESTION
In the Bismon static source code analyzer (GPLv3+ licensed, git commit 49dd1bd232854a) for embedded C and C++ code (using a plugin for the straight GCC 10 compiler on Debian bookworm for x86-64), I have a test Bash script Hello-World-Analyze which uses a Bash array variable bismon_hello_args.
That variable is declared (at line 56) using:
declare -a bismon_hello_args
I would like to fill that bismon_hello_args array variable from script arguments starting with --bismon, and later invoke the bismon executable (compiled from C source files) with several arguments to its main being the elements of that bismon_hello_args array variable.
So if my Hello-World-Analyze script is invoked as Hello-World-Analyze --bismon-debug-after-load --bismon-anon-web-cookie=/tmp/bismoncookie --gcc=/usr/local/bin/gcc-11, I want the bismon ELF executable to be started with two arguments (so argc=3, in C parlance): --bismon-debug-after-load followed by --bismon-anon-web-cookie=/tmp/bismoncookie.
For some reason, the following code (lines 58 to 64) in that Hello-World-Analyze script:
for f in "$@"; do
    case "$f" in
        --bismon*) bismon_hello_args+=$f;;
        --asm) export BISMON_PLUGIN_ASMOUT=/tmp/gcc10_metaplugin_BMGCC.s;;
        --gcc=*) export BISMON_GCC=$(echo $f | /bin/sed -e s/--gcc=//);;
    esac
done
does not work as expected. It should be (and was in a previous git commit e8c3d795bc9dc8) later followed with:
./bismon $bismon_hello_args &
But debugging prints show that bismon is invoked with argc=2, so one long argv[1] program argument...
What am I doing wrong?
ANSWER
Answered 2021-Sep-20 at 08:39
Merely += adds a string to an existing string. You probably want bismon_hello_args+=("$f");; (notice also the quotes). To call the program, use ./bismon "${bismon_hello_args[@]}" & (notice the quotes, again).
The syntax to use an array variable is different than the syntax for simple scalars. This syntax was inherited from ksh, which in turn needed to find a way to introduce new behavior without sacrificing compatibility with existing Bourne shell scripts. Without the array modifiers, Bash simply accesses the first element of the array. (This confuses beginners and experienced practitioners alike.)
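Applied to the loop from the question, the fix would look roughly like this (only the --bismon* case and the final invocation change):
for f in "$@"; do
    case "$f" in
        --bismon*) bismon_hello_args+=("$f");;
        --asm) export BISMON_PLUGIN_ASMOUT=/tmp/gcc10_metaplugin_BMGCC.s;;
        --gcc=*) export BISMON_GCC=$(echo $f | /bin/sed -e s/--gcc=//);;
    esac
done
./bismon "${bismon_hello_args[@]}" &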
QUESTION
I have a C# Roslyn code analyzer that needs to analyze the usage scenarios of generic method invocations of a given class. I am gathering all the references to the method, the generic type parameters and so forth, and then want to invoke the methods (via reflection) to analyze the output and report potential diagnostics in the analyzer. Is there a way to get from a Roslyn Compilation.Assembly to a System.Reflection.Assembly? If not, is there any other way?
The Analyzer project and the solution to be analyzed are under my control.
Thanks!
ANSWER
Answered 2021-Aug-30 at 18:04
You can't do this: when your analyzer is running, we haven't actually built the assembly yet. Furthermore, there's no guarantee your built thing can actually run. If I'm using a Windows machine to, say, build a project that only runs on Linux... that won't work well.
QUESTION
I need to parse a URI-like string. This URI is specific to the project and corresponds to "scheme://path/to/file", where the path should be a syntactically correct path to a file from a filesystem point of view. For this purpose std::regex was used with the pattern R"(^(r[o|w])\:\/\/(((?!\$|\~|\.{2,}|\/$).)+)$)".
It works fine, but the code analyzer complains that it is not compliant, as the $ character does not belong to the C++ Language Standard basic source character set:
AUTOSAR C++14 A2-3-1 (Required) Only those characters specified in the C++ Language Standard basic source character set shall be used in the source code.
Exception to this rule (according to Autosar Guidelines):
It is permitted to use other characters inside the text of a wide string and a UTF-8 encoded string literal.
wchar_t is prohibited by another rule. It works with a UTF-8 string literal, but that looks ugly and unreadable in the code, and I'm also afraid it is not safe.
Could someone help me with a workaround? Or, if std::regex is not the best solution here, what would be better?
Are there any other drawbacks to using a UTF-8 string literal?
P.S. I need the $ to be sure (in the parsing phase) that the path is not a directory and that it contains none of /../, ~, $, so I can't just skip it.
ANSWER
Answered 2021-Aug-05 at 17:28
I feel like making the code worse for the sake of satisfying an analyser is counterproductive and most likely violates the spirit of the guidelines, so I'm intentionally ignoring ways to address the problem that would involve building the regex string in a convoluted manner, since what you did is the best way to build such a regex string.
Could someone help me with a workaround? Or, if std::regex is not the best solution here, what would be better?
Option A: Write a simple validation function:
I'm actually surprised that such strict guidelines even allow regexes in the first place. They are notoriously hard to audit, debug, and maintain.
You could easily express the same logic with actual code, which would not only satisfy the analyser, but be more aligned with the spirit of the guidelines. On top of that it'll compile faster and probably run faster as well.
Something along these rough lines, based on a cursory reading of your regex. (please don't just use this without running it through a battery of tests, I sure didn't):
#include <array>
#include <string_view>

bool check_and_remove_path_prefix(std::string_view& path) {
    // string_view elements (rather than const char*) so that p.size() below compiles.
    constexpr std::array<std::string_view, 2> valid_prefixes = {
        "ro://",
        "rw://"
    };
    for(auto p : valid_prefixes) {
        if(path.starts_with(p)) {
            path.remove_prefix(p.size());
            return true;
        }
    }
    return false;
}
bool is_valid_path_elem_char(char c) {
    // This matches your regex, but is probably wrong, as it will accept a bunch of control characters.
    // N.B. \x24 is the dollar sign character
    return c != '~' && c != '\x24' && c != '\r' && c != '\n';
}
bool is_valid_path(std::string_view path) {
    if(!check_and_remove_path_prefix(path)) { return false; }
    char prev_c = '\0';
    bool current_segment_empty = true;
    for(char c : path) {
        // Disallow two or more consecutive periods
        if(c == '.' && prev_c == '.') { return false; }
        // Disallow empty segments
        if(c == '/') {
            if(current_segment_empty) { return false; }
            current_segment_empty = true;
        }
        else {
            if(!is_valid_path_elem_char(c)) { return false; }
            current_segment_empty = false;
        }
        prev_c = c;
    }
    return !current_segment_empty;
}
Option B: Don't bother with the check
It's hard from our point of view to determine whether that option is in the cards for you or not, but for all intents and purposes, the distinction between a badly formed path and a well-formed path that does not point to a valid file is moot.
So just use the path as if it's valid; you should be handling the errors that would result from a badly formed path anyway.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install goreporter
Support