CSVStreamInstrument can cause upstream sources to buffer data #890

Open
Copper280z opened this issue Sep 13, 2024 · 0 comments
In the case where a source sends data faster than CSVStreamInstrument can consume it, some sources will buffer the data instead of dropping it. A good example is orbcat piped to netcat:

orbcat -c 1,"CSV-DATA,%d," -c 2,"%d\r\n" |& nc -k -l localhost 2345

This causes the data shown to become stale, potentially very quickly if the source can supply a lot of data.

I think the expected behavior is that scopehal will read all the data available, then use the last valid CSV- line (of each type, data, unit, and name?) for the update.

A simple solution is to use the first valid line in the buffer and then flush the rest, though this introduces a one-update delay and somewhat breaks the idea of showing the "most recent data point". A better approach might be to read the whole buffer and then parse through it, but that doesn't look easy to retrofit given the byte-at-a-time code that shows up in several ReadReply implementations:

	//FIXME: there *has* to be a more efficient way to do this...
	char tmp = ' ';
	string ret;
	while(true)
	{
		if(!m_socket.RecvLooped((unsigned char*)&tmp, 1))
			break;
		if( (tmp == '\n') || ( (tmp == ';') && endOnSemicolon ) )
			break;
		else
			ret += tmp;
	}
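As a sketch of the "use the last valid line" idea, a helper like the one below (hypothetical, not part of scopehal) could scan an accumulated read buffer backwards for the last complete line starting with a given prefix such as `CSV-DATA`, while keeping any trailing partial line around for the next read:

```cpp
#include <cassert>
#include <cstring>
#include <string>

// Hypothetical helper: given everything read from the socket so far,
// return the last *complete* line starting with `prefix`. Any partial
// data after the final newline is stored in `leftover` so it can be
// prepended to the next read.
std::string LastCompleteLine(
	const std::string& buf, const std::string& prefix, std::string& leftover)
{
	size_t lastNewline = buf.find_last_of('\n');
	if(lastNewline == std::string::npos)
	{
		leftover = buf;	// no complete line yet
		return "";
	}
	leftover = buf.substr(lastNewline + 1);

	// Walk the complete region backwards, one line at a time
	std::string complete = buf.substr(0, lastNewline);
	size_t end = complete.size();
	while(end > 0)
	{
		size_t nl = complete.find_last_of('\n', end - 1);
		size_t lineStart = (nl == std::string::npos) ? 0 : nl + 1;
		std::string line = complete.substr(lineStart, end - lineStart);
		if(!line.empty() && line.back() == '\r')
			line.pop_back();
		if(line.compare(0, prefix.size(), prefix) == 0)
			return line;
		if(lineStart == 0)
			break;
		end = lineStart - 1;	// skip over the '\n' we just found
	}
	return "";
}
```

Doing this per prefix (data, units, name) would cover the "last valid CSV- line of each type" behavior described above.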

I tried using a simple while loop to empty the buffer at the instrument level, but didn't have much success: data seemed to come in faster than it could be read out byte by byte, which caused the loop to hang. The flush strategy worked fine, with the aforementioned caveats. There might be a decent way to do this with ReadRawData, but I don't think you can get the number of bytes received for a read that timed out, so it may not be any better.
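For the bulk-read side, a plain POSIX sketch (not scopehal's Socket API) of draining the kernel buffer in large chunks, rather than one byte per recv call, might look like:

```cpp
#include <cassert>
#include <cerrno>
#include <cstring>
#include <string>
#include <sys/socket.h>
#include <unistd.h>

// Sketch: drain whatever is currently queued on the socket in large
// chunks. Returns true once the kernel buffer is empty, leaving
// everything read appended to `buf`; returns false on EOF or error.
bool DrainSocket(int fd, std::string& buf)
{
	char chunk[65536];
	while(true)
	{
		// MSG_DONTWAIT makes this single call non-blocking even on
		// a blocking socket, so we stop as soon as the buffer is dry
		ssize_t n = recv(fd, chunk, sizeof(chunk), MSG_DONTWAIT);
		if(n > 0)
		{
			buf.append(chunk, n);
			continue;
		}
		if(n == 0)
			return false;	// peer closed the connection
		if(errno == EAGAIN || errno == EWOULDBLOCK)
			return true;	// kernel buffer empty, done
		return false;		// real error
	}
}
```

Combined with a "keep the last complete line, carry the partial tail forward" parse of the drained buffer, this would avoid both the stale-data buildup and the one-update delay of the flush approach.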
