Class DataLakeFileAsyncClient

java.lang.Object
com.azure.storage.file.datalake.DataLakePathAsyncClient
com.azure.storage.file.datalake.DataLakeFileAsyncClient

public class DataLakeFileAsyncClient extends DataLakePathAsyncClient
This class provides a client that contains file operations for Azure Storage Data Lake. Operations provided by this client include creating a file, deleting a file, renaming a file, setting metadata and HTTP headers, setting and retrieving access control, getting properties, reading a file, and appending and flushing data to write to a file.

This client is instantiated through DataLakePathClientBuilder or retrieved via DataLakeFileSystemAsyncClient.getFileAsyncClient(String).

Please refer to the Azure Docs for more information.
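
Code Samples

A minimal construction sketch; the endpoint, account credentials, file system name, and path shown here are illustrative placeholders rather than values from this reference:

 DataLakeFileAsyncClient fileAsyncClient = new DataLakePathClientBuilder()
     .endpoint("https://<account-name>.dfs.core.windows.net")
     .credential(new StorageSharedKeyCredential("<account-name>", "<account-key>"))
     .fileSystemName("myfilesystem")
     .pathName("mydir/hello.txt")
     .buildFileAsyncClient();

 // Or, assuming an existing DataLakeFileSystemAsyncClient named fileSystemAsyncClient:
 DataLakeFileAsyncClient fromFileSystem = fileSystemAsyncClient.getFileAsyncClient("mydir/hello.txt");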

  • Method Details

    • getFileUrl

      public String getFileUrl()
      Gets the URL of the file represented by this client on the Data Lake service.
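
      Code Samples

      A minimal usage sketch; client is assumed to be an existing DataLakeFileAsyncClient:

       String fileUrl = client.getFileUrl();
       System.out.println("The URL of the file is " + fileUrl);
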
      Returns:
      the URL.
    • getFilePath

      public String getFilePath()
      Gets the path of this file, not including the name of the resource itself.
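
      Code Samples

      A minimal usage sketch; client is assumed to be an existing DataLakeFileAsyncClient:

       String filePath = client.getFilePath();
       System.out.println("The path of the file is " + filePath);
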
      Returns:
      The path of the file.
    • getFileName

      public String getFileName()
      Gets the name of this file, not including its full path.
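
      Code Samples

      A minimal usage sketch; client is assumed to be an existing DataLakeFileAsyncClient:

       String fileName = client.getFileName();
       System.out.println("The name of the file is " + fileName);
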
      Returns:
      The name of the file.
    • getCustomerProvidedKeyAsyncClient

      public DataLakeFileAsyncClient getCustomerProvidedKeyAsyncClient(CustomerProvidedKey customerProvidedKey)
      Creates a new DataLakeFileAsyncClient with the specified customerProvidedKey.
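
      Code Samples

      An illustrative sketch; client is assumed to be an existing DataLakeFileAsyncClient and the key value is a placeholder Base64-encoded AES-256 key:

       CustomerProvidedKey customerProvidedKey = new CustomerProvidedKey("<base64-encoded-key>");
       DataLakeFileAsyncClient clientWithCpk = client.getCustomerProvidedKeyAsyncClient(customerProvidedKey);
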
      Overrides:
      getCustomerProvidedKeyAsyncClient in class DataLakePathAsyncClient
      Parameters:
      customerProvidedKey - the CustomerProvidedKey for the file; pass null to use no customer provided key.
      Returns:
      a DataLakeFileAsyncClient with the specified customerProvidedKey.
    • delete

      public Mono<Void> delete()
      Deletes a file.

      Code Samples

       client.delete().subscribe(response ->
           System.out.println("Delete request completed"));
       

      For more information, see the Azure Docs

      Returns:
      A reactive response signalling completion.
    • deleteWithResponse

      public Mono<Response<Void>> deleteWithResponse(DataLakeRequestConditions requestConditions)
      Deletes a file.

      Code Samples

       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId);
      
       client.deleteWithResponse(requestConditions)
           .subscribe(response -> System.out.println("Delete request completed"));
       

      For more information, see the Azure Docs

      Parameters:
      requestConditions - DataLakeRequestConditions
      Returns:
      A reactive response signalling completion.
    • deleteIfExists

      public Mono<Boolean> deleteIfExists()
      Deletes a file if it exists.

      Code Samples

       client.deleteIfExists().subscribe(deleted -> {
           if (deleted) {
               System.out.println("Successfully deleted.");
           } else {
               System.out.println("Does not exist.");
           }
       });
       

      For more information, see the Azure Docs

      Overrides:
      deleteIfExists in class DataLakePathAsyncClient
      Returns:
      A reactive response signalling completion. true indicates that the file was successfully deleted; false indicates that the file did not exist.
    • deleteIfExistsWithResponse

      public Mono<Response<Boolean>> deleteIfExistsWithResponse(DataLakePathDeleteOptions options)
      Deletes a file if it exists.

      Code Samples

       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId);
       DataLakePathDeleteOptions options = new DataLakePathDeleteOptions().setIsRecursive(false)
           .setRequestConditions(requestConditions);
      
       client.deleteIfExistsWithResponse(options).subscribe(response -> {
           if (response.getStatusCode() == 404) {
               System.out.println("Does not exist.");
           } else {
               System.out.println("successfully deleted.");
           }
       });
       

      For more information, see the Azure Docs

      Overrides:
      deleteIfExistsWithResponse in class DataLakePathAsyncClient
      Parameters:
      options - DataLakePathDeleteOptions
      Returns:
      A reactive response signalling completion. If the Response's status code is 200, the file was successfully deleted; if the status code is 404, the file does not exist.
    • upload

      public Mono<PathInfo> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)
      Creates a new file and uploads content.

      Code Samples

       client.upload(data, parallelTransferOptions)
           .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
           .subscribe(completion -> System.out.println("Upload succeeded"));
       
      Parameters:
      data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
      parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
      Returns:
      A reactive response containing the information of the uploaded file.
    • upload

      public Mono<PathInfo> upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)
      Creates a new file and uploads content.

      Code Samples

       boolean overwrite = false; // Default behavior
       client.upload(data, parallelTransferOptions, overwrite)
           .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
           .subscribe(completion -> System.out.println("Upload succeeded"));
       
      Parameters:
      data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
      parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
      overwrite - Whether to overwrite, should the file already exist.
      Returns:
      A reactive response containing the information of the uploaded file.
    • uploadWithResponse

      public Mono<Response<PathInfo>> uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
      Creates a new file. To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

      Code Samples

       PathHttpHeaders headers = new PathHttpHeaders()
           .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
           .setContentLanguage("en-US")
           .setContentType("binary");
      
       Map<String, String> metadata = Collections.singletonMap("metadata", "value");
       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId)
           .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
       Long blockSize = 100L * 1024L * 1024L; // 100 MB;
       ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
      
       client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, requestConditions)
           .subscribe(response -> System.out.println("Uploaded file %n"));
       

      Using Progress Reporting

       PathHttpHeaders httpHeaders = new PathHttpHeaders()
           .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
           .setContentLanguage("en-US")
           .setContentType("binary");
      
       Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
       DataLakeRequestConditions conditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId)
           .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
       ParallelTransferOptions pto = new ParallelTransferOptions()
           .setBlockSizeLong(blockSize)
           .setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
      
       client.uploadWithResponse(data, pto, httpHeaders, metadataMap, conditions)
           .subscribe(response -> System.out.println("Uploaded file %n"));
       
      Parameters:
      data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
      parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
      headers - PathHttpHeaders
      metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
      requestConditions - DataLakeRequestConditions
      Returns:
      A reactive response containing the information of the uploaded file.
    • uploadWithResponse

      public Mono<Response<PathInfo>> uploadWithResponse(FileParallelUploadOptions options)
      Creates a new file.

      To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

      Code Samples

       PathHttpHeaders headers = new PathHttpHeaders()
           .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
           .setContentLanguage("en-US")
           .setContentType("binary");
      
       Map<String, String> metadata = Collections.singletonMap("metadata", "value");
       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId)
           .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
       Long blockSize = 100L * 1024L * 1024L; // 100 MB;
       ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
      
       client.uploadWithResponse(new FileParallelUploadOptions(data)
           .setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
           .setMetadata(metadata).setRequestConditions(requestConditions)
           .setPermissions("permissions").setUmask("umask"))
           .subscribe(response -> System.out.println("Uploaded file %n"));
       

      Using Progress Reporting

       PathHttpHeaders httpHeaders = new PathHttpHeaders()
           .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
           .setContentLanguage("en-US")
           .setContentType("binary");
      
       Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
       DataLakeRequestConditions conditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId)
           .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
       ParallelTransferOptions pto = new ParallelTransferOptions()
           .setBlockSizeLong(blockSize)
           .setProgressListener(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
      
       client.uploadWithResponse(new FileParallelUploadOptions(data)
           .setParallelTransferOptions(pto).setHeaders(httpHeaders)
           .setMetadata(metadataMap).setRequestConditions(conditions)
           .setPermissions("permissions").setUmask("umask"))
           .subscribe(response -> System.out.println("Uploaded file %n"));
       
      Parameters:
      options - FileParallelUploadOptions
      Returns:
      A reactive response containing the information of the uploaded file.
    • uploadFromFile

      public Mono<Void> uploadFromFile(String filePath)
      Creates a new file with the content of the specified file. By default, this method will not overwrite an existing file.

      Code Samples

       client.uploadFromFile(filePath)
           .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
           .subscribe(completion -> System.out.println("Upload from file succeeded"));
       
      Parameters:
      filePath - Path of the file to upload
      Returns:
      An empty response
      Throws:
      UncheckedIOException - If an I/O error occurs
    • uploadFromFile

      public Mono<Void> uploadFromFile(String filePath, boolean overwrite)
      Creates a new file with the content of the specified file.

      Code Samples

       boolean overwrite = false; // Default behavior
       client.uploadFromFile(filePath, overwrite)
           .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
           .subscribe(completion -> System.out.println("Upload from file succeeded"));
       
      Parameters:
      filePath - Path of the file to upload
      overwrite - Whether to overwrite, should the file already exist.
      Returns:
      An empty response
      Throws:
      UncheckedIOException - If an I/O error occurs
    • uploadFromFile

      public Mono<Void> uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
      Creates a new file with the content of the specified file.

      To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).

      Code Samples

       PathHttpHeaders headers = new PathHttpHeaders()
           .setContentMd5("data".getBytes(StandardCharsets.UTF_8))
           .setContentLanguage("en-US")
           .setContentType("binary");
      
       Map<String, String> metadata = Collections.singletonMap("metadata", "value");
       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId)
           .setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
       Long blockSize = 100L * 1024L * 1024L; // 100 MB;
       ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
      
       client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions)
           .doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
           .subscribe(completion -> System.out.println("Upload from file succeeded"));
       
      Parameters:
      filePath - Path of the file to upload
      parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
      headers - PathHttpHeaders
      metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
      requestConditions - DataLakeRequestConditions
      Returns:
      An empty response
      Throws:
      UncheckedIOException - If an I/O error occurs
    • append

      public Mono<Void> append(Flux<ByteBuffer> data, long fileOffset, long length)
      Appends data to the specified resource to later be flushed (written) by a call to flush.

      Code Samples

       client.append(data, offset, length)
           .subscribe(
               response -> System.out.println("Append data completed"),
               error -> System.out.printf("Error when calling append data: %s", error));
       

      For more information, see the Azure Docs
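
      A combined append-and-flush sketch; data1, data2, length1, and length2 are illustrative placeholders, and the flush(long, boolean) overload used here is documented below:

       client.append(data1, 0, length1)
           .then(client.append(data2, length1, length2))
           .then(client.flush(length1 + length2, true))
           .subscribe(pathInfo ->
               System.out.println("Append and flush completed, eTag: " + pathInfo.getETag()));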

      Parameters:
      data - The data to write to the file.
      fileOffset - The position where the data is to be appended.
      length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
      Returns:
      A reactive response signalling completion.
    • appendWithResponse

      public Mono<Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)
      Appends data to the specified resource to later be flushed (written) by a call to flush.

      Code Samples

       byte[] contentMd5 = new byte[0]; // Replace with valid md5
      
       client.appendWithResponse(data, offset, length, contentMd5, leaseId).subscribe(response ->
           System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
       

      For more information, see the Azure Docs

      Parameters:
      data - The data to write to the file.
      fileOffset - The position where the data is to be appended.
      length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
      contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
      leaseId - If a lease ID is set, requests will fail if the provided lease does not match the active lease on the file.
      Returns:
      A reactive response signalling completion.
    • flush

      public Mono<PathInfo> flush(long position)
      Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

      By default this method will not overwrite existing data.

      Code Samples

       client.flush(position).subscribe(response ->
           System.out.println("Flush data completed"));
       

      For more information, see the Azure Docs

      Parameters:
      position - The length of the file after all data has been written.
      Returns:
      A reactive response containing the information of the created resource.
    • flush

      public Mono<PathInfo> flush(long position, boolean overwrite)
      Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

      Code Samples

       boolean overwrite = true;
       client.flush(position, overwrite).subscribe(response ->
           System.out.println("Flush data completed"));
       

      For more information, see the Azure Docs

      Parameters:
      position - The length of the file after all data has been written.
      overwrite - Whether to overwrite, should data exist on the file.
      Returns:
      A reactive response containing the information of the created resource.
    • flushWithResponse

      public Mono<Response<PathInfo>> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)
      Flushes (writes) data previously appended to the file through a call to append. The previously uploaded data must be contiguous.

      Code Samples

       boolean retainUncommittedData = false;
       boolean close = false;
       PathHttpHeaders httpHeaders = new PathHttpHeaders()
           .setContentLanguage("en-US")
           .setContentType("binary");
       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId);
      
       client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
           requestConditions).subscribe(response ->
           System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
       

      For more information, see the Azure Docs

      Parameters:
      position - The length of the file after all data has been written.
      retainUncommittedData - Whether uncommitted data is to be retained after the operation.
      close - Whether the file changed event raised by this operation indicates completion (true) or modification (false).
      httpHeaders - PathHttpHeaders
      requestConditions - DataLakeRequestConditions
      Returns:
      A reactive response containing the information of the created resource.
    • read

      public Flux<ByteBuffer> read()
      Reads the entire file.

      Code Samples

       ByteArrayOutputStream downloadData = new ByteArrayOutputStream();
       client.read().subscribe(piece -> {
           try {
               downloadData.write(piece.array());
           } catch (IOException ex) {
               throw new UncheckedIOException(ex);
           }
       });
       

      For more information, see the Azure Docs

      Returns:
      A reactive response containing the file data.
    • readWithResponse

      public Mono<FileReadAsyncResponse> readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)
      Reads a range of bytes from a file.

      Code Samples

       FileRange range = new FileRange(1024, 2048L);
       DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);
      
       client.readWithResponse(range, options, null, false).subscribe(response -> {
           ByteArrayOutputStream readData = new ByteArrayOutputStream();
           response.getValue().subscribe(piece -> {
               try {
                   readData.write(piece.array());
               } catch (IOException ex) {
                   throw new UncheckedIOException(ex);
               }
           });
       });
       

      For more information, see the Azure Docs

      Parameters:
      range - FileRange
      options - DownloadRetryOptions
      requestConditions - DataLakeRequestConditions
      getRangeContentMd5 - Whether the contentMD5 for the specified file range should be returned.
      Returns:
      A reactive response containing the file data.
    • readToFile

      public Mono<PathProperties> readToFile(String filePath)
      Reads the entire file into a file specified by the path.

      The file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.

      Code Samples

       client.readToFile(file).subscribe(response -> System.out.println("Completed download to file"));
       

      For more information, see the Azure Docs

      Parameters:
      filePath - A String representing the filePath where the downloaded data will be written.
      Returns:
      A reactive response containing the file properties and metadata.
    • readToFile

      public Mono<PathProperties> readToFile(String filePath, boolean overwrite)
      Reads the entire file into a file specified by the path.

      If overwrite is set to false, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.

      Code Samples

       boolean overwrite = false; // Default value
       client.readToFile(file, overwrite).subscribe(response -> System.out.println("Completed download to file"));
       

      For more information, see the Azure Docs

      Parameters:
      filePath - A String representing the filePath where the downloaded data will be written.
      overwrite - Whether to overwrite the file, should the file exist.
      Returns:
      A reactive response containing the file properties and metadata.
    • readToFileWithResponse

      public Mono<Response<PathProperties>> readToFileWithResponse(String filePath, FileRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
      Reads the entire file into a file specified by the path.

      By default, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.

      Code Samples

       FileRange fileRange = new FileRange(1024, 2048L);
       DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
       Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
           StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options
      
       client.readToFileWithResponse(file, fileRange, null, downloadRetryOptions, null, false, openOptions)
           .subscribe(response -> System.out.println("Completed download to file"));
       

      For more information, see the Azure Docs

      Parameters:
      filePath - A String representing the filePath where the downloaded data will be written.
      range - FileRange
      parallelTransferOptions - ParallelTransferOptions to use to download to file. Number of parallel transfers parameter is ignored.
      options - DownloadRetryOptions
      requestConditions - DataLakeRequestConditions
      rangeGetContentMd5 - Whether the contentMD5 for the specified file range should be returned.
      openOptions - OpenOptions to use to configure how to open or create the file.
      Returns:
      A reactive response containing the file properties and metadata.
      Throws:
      IllegalArgumentException - If blockSize is less than 0 or greater than 100MB.
      UncheckedIOException - If an I/O error occurs.
    • rename

      public Mono<DataLakeFileAsyncClient> rename(String destinationFileSystem, String destinationPath)
      Moves the file to another location within the file system. For more information, see the Azure Docs.

      Code Samples

       DataLakeFileAsyncClient renamedClient = client.rename(fileSystemName, destinationPath).block();
       System.out.println("Directory Client has been renamed");
       
      Parameters:
      destinationFileSystem - The file system of the destination within the account. null for the current file system.
      destinationPath - Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path within myfilesystem (for example, "newdir/hi.txt"), set destinationPath = "newdir/hi.txt".
      Returns:
      A Mono containing a DataLakeFileAsyncClient used to interact with the new file created.
    • renameWithResponse

      public Mono<Response<DataLakeFileAsyncClient>> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)
      Moves the file to another location within the file system. For more information, see the Azure Docs.

      Code Samples

       DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
           .setLeaseId(leaseId);
       DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();
      
       DataLakeFileAsyncClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
           sourceRequestConditions, destinationRequestConditions).block().getValue();
       System.out.println("Directory Client has been renamed");
       
      Parameters:
      destinationFileSystem - The file system of the destination within the account. null for the current file system.
      destinationPath - Relative path from the file system to rename the file to; excludes the file system name. For example, to move a file with fileSystem = "myfilesystem" and path = "mydir/hello.txt" to another path within myfilesystem (for example, "newdir/hi.txt"), set destinationPath = "newdir/hi.txt".
      sourceRequestConditions - DataLakeRequestConditions against the source.
      destinationRequestConditions - DataLakeRequestConditions against the destination.
      Returns:
      A Mono containing a Response whose value contains a DataLakeFileAsyncClient used to interact with the file created.
    • query

      public Flux<ByteBuffer> query(String expression)
      Queries the entire file.

      For more information, see the Azure Docs

      Code Samples

       ByteArrayOutputStream queryData = new ByteArrayOutputStream();
       String expression = "SELECT * from BlobStorage";
       client.query(expression).subscribe(piece -> {
           try {
               queryData.write(piece.array());
           } catch (IOException ex) {
               throw new UncheckedIOException(ex);
           }
       });
       
      Parameters:
      expression - The query expression.
      Returns:
      A reactive response containing the queried data.
    • queryWithResponse

      public Mono<FileQueryAsyncResponse> queryWithResponse(FileQueryOptions queryOptions)
      Queries the entire file.

      For more information, see the Azure Docs

      Code Samples

       String expression = "SELECT * from BlobStorage";
       FileQueryJsonSerialization input = new FileQueryJsonSerialization()
           .setRecordSeparator('\n');
       FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
           .setEscapeChar('\0')
           .setColumnSeparator(',')
           .setRecordSeparator('\n')
           .setFieldQuote('\'')
           .setHeadersPresent(true);
       DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
       Consumer<FileQueryError> errorConsumer = System.out::println;
       Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
           + progress.getBytesScanned());
       FileQueryOptions queryOptions = new FileQueryOptions(expression)
           .setInputSerialization(input)
           .setOutputSerialization(output)
           .setRequestConditions(requestConditions)
           .setErrorConsumer(errorConsumer)
           .setProgressConsumer(progressConsumer);
      
       client.queryWithResponse(queryOptions)
           .subscribe(response -> {
               ByteArrayOutputStream queryData = new ByteArrayOutputStream();
               response.getValue().subscribe(piece -> {
                   try {
                       queryData.write(piece.array());
                   } catch (IOException ex) {
                       throw new UncheckedIOException(ex);
                   }
               });
           });
       
      Parameters:
      queryOptions - The query options
      Returns:
      A reactive response containing the queried data.
    • scheduleDeletion

      public Mono<Void> scheduleDeletion(FileScheduleDeletionOptions options)
      Schedules the file for deletion.

      Code Samples

       FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
      
       client.scheduleDeletion(options)
           .subscribe(r -> System.out.println("File deletion has been scheduled"));
       
      Parameters:
      options - Schedule deletion parameters.
      Returns:
      A reactive response signalling completion.
    • scheduleDeletionWithResponse

      public Mono<Response<Void>> scheduleDeletionWithResponse(FileScheduleDeletionOptions options)
      Schedules the file for deletion.

      Code Samples

       FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
      
       client.scheduleDeletionWithResponse(options)
           .subscribe(r -> System.out.println("File deletion has been scheduled"));
       
      Parameters:
      options - Schedule deletion parameters.
      Returns:
      A reactive response signalling completion.