Class: Datadog::Profiling::Exporter

Inherits: Object
Defined in:
lib/datadog/profiling/exporter.rb

Overview

Exports profiling data gathered by the multiple recorders in a `Flush`.

@ivoanjo: Note that the recorder that gathers pprof data is special, since we use its start/finish/empty? to decide if there’s data to flush, as well as the timestamp for that data. I could’ve made the whole design more generic, but I’m unsure if we’ll ever have more than a handful of recorders, so I’ve decided to make it specific until we actually need to support more recorders.
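
In practice this class sits behind a scheduler-style loop: something periodically asks #can_flush? and, when it returns true, calls #flush and hands the result to a transport for upload. A minimal sketch of that usage; `exporter` is an already-built Exporter and `transport` is a hypothetical uploader, neither is provided by this class:

# Illustrative polling loop around an existing exporter.
loop do
  sleep(60)

  next unless exporter.can_flush? # enough profile time accumulated since the last report?

  flush = exporter.flush          # nil when there is nothing worth reporting
  transport.export(flush) if flush
end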

Constant Summary

PROFILE_DURATION_THRESHOLD_SECONDS = 1
  Profiles with a duration less than this will not be reported.

Instance Method Summary

Constructor Details

#initialize(pprof_recorder:, worker:, info_collector:, code_provenance_collector:, internal_metadata:, minimum_duration_seconds: PROFILE_DURATION_THRESHOLD_SECONDS, time_provider: Time) ⇒ Exporter

Returns a new instance of Exporter.

# File 'lib/datadog/profiling/exporter.rb', line 33

def initialize(
  pprof_recorder:,
  worker:,
  info_collector:,
  code_provenance_collector:,
  internal_metadata:,
  minimum_duration_seconds: PROFILE_DURATION_THRESHOLD_SECONDS,
  time_provider: Time
)
  @pprof_recorder = pprof_recorder
  @worker = worker
  @code_provenance_collector = code_provenance_collector
  @minimum_duration_seconds = minimum_duration_seconds
  @time_provider = time_provider
  @last_flush_finish_at = nil
  @created_at = time_provider.now.utc
  @internal_metadata = internal_metadata
  # NOTE: At the time of this comment collected info does not change over time so we'll hardcode
  #       it on startup to prevent serializing the same info on every flush.
  @info_json = JSON.fast_generate(info_collector.info).freeze
end
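
The collaborators only need to respond to the methods this class calls on them (the pprof recorder's #serialize and #stats, the worker's #stats_and_reset_not_thread_safe, the info collector's #info, and the optional code provenance collector's #refresh/#generate_json), so construction can be sketched with stand-in objects. Everything below except the Exporter constructor itself is hypothetical, and assumes the datadog gem (which provides Datadog::Profiling) is loaded:

# Stand-ins for the real recorder, worker and info collector; the real
# profiler wires in its own components here.
fake_recorder = Class.new do
  def serialize
    finish = Time.now.utc
    [finish - 65, finish, "<compressed pprof bytes>", {}] # start, finish, pprof, profile stats
  end

  def stats
    {}
  end
end.new

fake_worker = Class.new do
  def stats_and_reset_not_thread_safe
    {}
  end
end.new

fake_info_collector = Class.new do
  def info
    { profiler: { version: "0.0.0" } }
  end
end.new

exporter = Datadog::Profiling::Exporter.new(
  pprof_recorder: fake_recorder,
  worker: fake_worker,
  info_collector: fake_info_collector,
  code_provenance_collector: nil, # optional; nil means no code provenance in the flush
  internal_metadata: {},
)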

Instance Method Details

#can_flush? ⇒ Boolean

Returns:

  • (Boolean)
# File 'lib/datadog/profiling/exporter.rb', line 90

def can_flush?
  !duration_below_threshold?(last_flush_finish_at || created_at, time_provider.now.utc)
end
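
Because elapsed time is measured through the injected time_provider (against created_at, or the previous flush's finish), the behavior can be shown with a controllable clock. A sketch, assuming the stand-in collaborators from the constructor example above and the default one-second minimum duration:

# A controllable clock that satisfies the time_provider contract used here:
# it responds to #now, and the returned value responds to #utc.
class FakeClock
  attr_accessor :now

  def initialize(now)
    @now = now
  end
end

clock = FakeClock.new(Time.utc(2024, 1, 1, 12, 0, 0))

exporter = Datadog::Profiling::Exporter.new(
  pprof_recorder: fake_recorder,
  worker: fake_worker,
  info_collector: fake_info_collector,
  code_provenance_collector: nil,
  internal_metadata: {},
  time_provider: clock,
)

exporter.can_flush? # => false, no profile time has elapsed yet
clock.now += 2      # advance past PROFILE_DURATION_THRESHOLD_SECONDS
exporter.can_flush? # => true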

#flush ⇒ Object

# File 'lib/datadog/profiling/exporter.rb', line 55

def flush
  worker_stats = @worker.stats_and_reset_not_thread_safe
  serialization_result = pprof_recorder.serialize
  return if serialization_result.nil?

  start, finish, compressed_pprof, profile_stats = serialization_result
  @last_flush_finish_at = finish

  if duration_below_threshold?(start, finish)
    Datadog.logger.debug("Skipped exporting profiling events as profile duration is below minimum")
    return
  end

  uncompressed_code_provenance = code_provenance_collector.refresh.generate_json if code_provenance_collector

  Flush.new(
    start: start,
    finish: finish,
    pprof_file_name: Datadog::Profiling::Ext::Transport::HTTP::PPROF_DEFAULT_FILENAME,
    pprof_data: compressed_pprof.to_s,
    code_provenance_file_name: Datadog::Profiling::Ext::Transport::HTTP::CODE_PROVENANCE_FILENAME,
    code_provenance_data: uncompressed_code_provenance,
    tags_as_array: Datadog::Profiling::TagBuilder.call(settings: Datadog.configuration).to_a,
    internal_metadata: internal_metadata.merge(
      {
        worker_stats: worker_stats,
        profile_stats: profile_stats,
        recorder_stats: pprof_recorder.stats,
        gc: GC.stat,
      }
    ),
    info_json: info_json,
  )
end
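
A hedged sketch of consuming the return value; `transport` is again a hypothetical uploader, not part of this class:

flush = exporter.flush

if flush
  # A Datadog::Profiling::Flush bundling the compressed pprof, the optional
  # code provenance JSON, tags and internal metadata, ready for upload.
  transport.export(flush)
else
  # nil: either the recorder had nothing to serialize, or the profile was
  # shorter than the minimum duration and was deliberately skipped.
end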

#reset_after_fork ⇒ Object

# File 'lib/datadog/profiling/exporter.rb', line 94

def reset_after_fork
  @last_flush_finish_at = time_provider.now.utc
  nil
end
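
This is intended for a freshly forked child process (the profiler's own fork handling normally takes care of calling it): resetting the last flush timestamp to "now" means #can_flush? waits at least the minimum profile duration before the child's first report, instead of reusing timing inherited from the parent. A hedged sketch of the effect:

pid = fork do
  # Inherited exporter state refers to flushes performed by the parent;
  # resetting makes the child's can_flush?/flush timing start from "now".
  exporter.reset_after_fork
  # ... restart profiling collection in the child ...
end
Process.wait(pid)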