hadoop - Load data from Cassandra


I am using Cassandra 1.2.12 and want to load data from Cassandra using Java code, but I am forced to use LIMIT in the query.

I am using the DataStax API to fetch data from Cassandra.

Let's assume a keyspace 'k' and a column family 'c'. Reading data from c with a condition yields 10 million records. Since I was getting a time-out exception, I limited the query to 10000 rows, and I know I can't use LIMIT to fetch rows 10001 to 20000, and so on. I still want to load the full 10 million records. How can I solve this problem?

What you're asking for is called pagination. You'll have to write queries with WHERE key > [some_value] to set the starting boundary for each slice you want to return. To find the correct value to use, look at the last row returned by the previous slice.

If you're not dealing with numbers, you can use the token() function for the range check, for example:

SELECT * FROM c WHERE token(name) > token('bob');

token() may also be required if you're paging on the partition key, which disallows slicing queries. For example (adapted from the DataStax documentation):

CREATE TABLE c (
  k int PRIMARY KEY,
  v1 int,
  v2 int
);

SELECT * FROM c WHERE token(k) > token(42);
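To show how the paging loop fits together in Java, here is a minimal sketch. It simulates the table with an in-memory TreeMap so it can run standalone; in real code, each call to fetchPage would instead execute a CQL query like SELECT * FROM c WHERE token(k) > token(?) LIMIT 10000 through the DataStax driver. The names fetchPage, PAGE_SIZE, and the map-based "table" are illustrative assumptions, not part of any driver API.

```java
import java.util.*;

// Sketch of token-style pagination. A TreeMap stands in for the Cassandra
// table so the loop itself can be demonstrated without a live cluster.
public class PagingSketch {
    static final int PAGE_SIZE = 3; // would be e.g. 10000 against Cassandra

    // Simulated table: the sorted keys play the role of token(k).
    static final NavigableMap<Integer, String> TABLE = new TreeMap<>();
    static {
        for (int i = 1; i <= 10; i++) TABLE.put(i, "row-" + i);
    }

    // Returns up to PAGE_SIZE rows whose key is strictly greater than
    // 'after' -- the equivalent of WHERE token(k) > token(?) LIMIT n.
    static List<Map.Entry<Integer, String>> fetchPage(int after) {
        List<Map.Entry<Integer, String>> page = new ArrayList<>();
        for (Map.Entry<Integer, String> e : TABLE.tailMap(after, false).entrySet()) {
            page.add(e);
            if (page.size() == PAGE_SIZE) break;
        }
        return page;
    }

    public static void main(String[] args) {
        int lastKey = Integer.MIN_VALUE; // start before any possible key
        int total = 0;
        List<Map.Entry<Integer, String>> page;
        while (!(page = fetchPage(lastKey)).isEmpty()) {
            total += page.size();
            // Remember the last key of this slice; it becomes the lower
            // bound of the next query, exactly as described above.
            lastKey = page.get(page.size() - 1).getKey();
        }
        System.out.println("fetched " + total + " rows");
    }
}
```

Each iteration fetches one bounded slice, so no single query has to return all 10 million rows and the time-out is avoided; the loop ends when a slice comes back empty.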

