Course
This procedure is part of a course that teaches you how to build a quickstart. If you haven't already, check out the course introduction.
Each procedure in this course builds on the last one, so make sure you've completed the previous procedure, send events from your product, before proceeding with this one.
Logs are generated by applications. They are time-based text records that help your users see what's happening in your system.
New Relic provides a variety of ways to instrument your application to send logs to our Log API.
In this lesson, you learn to send logs from your product using our telemetry software development kit (SDK).
Use our SDK
We offer an open source telemetry SDK in several of the most popular programming languages. These SDKs send data to our data ingest APIs, including our Log API. Of these, the Python and Java SDKs work with the Log API.
In this lesson, you learn how to install and use the Python telemetry SDK to send logs to New Relic.
Change to the send-logs/flashDB directory of the course repository.
$cd ../../send-logs/flashDB
If you haven't already, install the newrelic-telemetry-sdk package.
$pip install newrelic-telemetry-sdk
Open the db.py file in the IDE of your choice and configure the LogClient.
```python
import os
import random
import datetime
from sys import getsizeof

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print("Reading...")

    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print("Writing...")

    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print("Updating...")

    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print("Deleting...")

    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")

    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)

def send_metrics(type_, interval_ms):
    print("sending metrics...")

    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))

    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms,
    )

    cache_hits = CountMetric(
        name="fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms,
    )

    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")

    count = Event(
        "fdb_method", {"method": type_}
    )

    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Important

This example expects an environment variable called $NEW_RELIC_LICENSE_KEY.
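The SDK clients read this key from the environment, so set it in your shell before starting the app. For example (the value shown is a placeholder for your actual license key):

```shell
# Placeholder value -- replace with your actual New Relic license key
export NEW_RELIC_LICENSE_KEY="YOUR_LICENSE_KEY"
```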
Instrument your app to send logs to New Relic.
```python
import os
import random
import datetime
from sys import getsizeof
import psutil

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient, Log

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print("Reading...")

    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print("Writing...")

    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print("Updating...")

    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print("Deleting...")

    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")

    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)

def send_metrics(type_, interval_ms):
    print("sending metrics...")

    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))

    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms,
    )

    cache_hits = CountMetric(
        name="fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms,
    )

    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")

    count = Event(
        "fdb_method", {"method": type_}
    )

    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def send_logs():
    print("sending log...")

    process = psutil.Process(os.getpid())
    memory_usage = process.memory_percent()

    # memory_percent() already returns a percentage, so only round it
    log = Log("FlashDB is using " + str(round(memory_usage, 2)) + "% memory")

    response = log_client.send(log)
    response.raise_for_status()
    print("Log sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Here, you instrument your platform to send memory_usage as a log to New Relic.
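Note that psutil's memory_percent() returns a percentage directly (for example, 1.57 means 1.57% of system memory), so the value only needs rounding before it's embedded in the log message. A minimal sketch of the formatting, with a hard-coded sample value standing in for a live psutil reading:

```python
# Sample value standing in for process.memory_percent(), which already
# returns a percentage (e.g., 1.5678 means roughly 1.57% of system memory)
memory_usage = 1.5678

# Round to two decimal places and build the log message, as send_logs does
message = "FlashDB is using " + str(round(memory_usage, 2)) + "% memory"
print(message)  # FlashDB is using 1.57% memory
```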
Amend the try_send function to send logs every 2 seconds.
```python
import os
import random
import datetime
from sys import getsizeof
import psutil

from newrelic_telemetry_sdk import MetricClient, GaugeMetric, CountMetric, SummaryMetric
from newrelic_telemetry_sdk import EventClient, Event
from newrelic_telemetry_sdk import LogClient, Log

metric_client = MetricClient(os.environ["NEW_RELIC_LICENSE_KEY"])
event_client = EventClient(os.environ["NEW_RELIC_LICENSE_KEY"])
log_client = LogClient(os.environ["NEW_RELIC_LICENSE_KEY"])

db = {}
stats = {
    "read_response_times": [],
    "read_errors": 0,
    "read_count": 0,
    "create_response_times": [],
    "create_errors": 0,
    "create_count": 0,
    "update_response_times": [],
    "update_errors": 0,
    "update_count": 0,
    "delete_response_times": [],
    "delete_errors": 0,
    "delete_count": 0,
    "cache_hit": 0,
}
last_push = {
    "read": datetime.datetime.now(),
    "create": datetime.datetime.now(),
    "update": datetime.datetime.now(),
    "delete": datetime.datetime.now(),
}

def read(key):
    print("Reading...")

    if random.randint(0, 30) > 10:
        stats["cache_hit"] += 1

    stats["read_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["read_errors"] += 1
    stats["read_count"] += 1
    try_send("read")

def create(key, value):
    print("Writing...")

    db[key] = value
    stats["create_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["create_errors"] += 1
    stats["create_count"] += 1
    try_send("create")

def update(key, value):
    print("Updating...")

    db[key] = value
    stats["update_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["update_errors"] += 1
    stats["update_count"] += 1
    try_send("update")

def delete(key):
    print("Deleting...")

    db.pop(key, None)
    stats["delete_response_times"].append(random.uniform(0.5, 1.0))
    if random.choice([True, False]):
        stats["delete_errors"] += 1
    stats["delete_count"] += 1
    try_send("delete")

def try_send(type_):
    print("try_send")

    now = datetime.datetime.now()
    interval_ms = (now - last_push[type_]).total_seconds() * 1000
    if interval_ms >= 2000:
        send_metrics(type_, interval_ms)
        send_event(type_)
        send_logs()

def send_metrics(type_, interval_ms):
    print("sending metrics...")

    keys = GaugeMetric("fdb_keys", len(db))
    db_size = GaugeMetric("fdb_size", getsizeof(db))

    errors = CountMetric(
        name=f"fdb_{type_}_errors",
        value=stats[f"{type_}_errors"],
        interval_ms=interval_ms,
    )

    cache_hits = CountMetric(
        name="fdb_cache_hits",
        value=stats["cache_hit"],
        interval_ms=interval_ms,
    )

    response_times = stats[f"{type_}_response_times"]
    response_time_summary = SummaryMetric(
        f"fdb_{type_}_responses",
        count=len(response_times),
        min=min(response_times),
        max=max(response_times),
        sum=sum(response_times),
        interval_ms=interval_ms,
    )

    batch = [keys, db_size, errors, cache_hits, response_time_summary]
    response = metric_client.send_batch(batch)
    response.raise_for_status()
    print("Sent metrics successfully!")
    clear(type_)

def send_event(type_):
    print("sending event...")

    count = Event(
        "fdb_method", {"method": type_}
    )

    response = event_client.send_batch(count)
    response.raise_for_status()
    print("Event sent successfully!")

def send_logs():
    print("sending log...")

    process = psutil.Process(os.getpid())
    memory_usage = process.memory_percent()

    # memory_percent() already returns a percentage, so only round it
    log = Log("FlashDB is using " + str(round(memory_usage, 2)) + "% memory")

    response = log_client.send(log)
    response.raise_for_status()
    print("Log sent successfully!")

def clear(type_):
    stats[f"{type_}_response_times"] = []
    stats[f"{type_}_errors"] = 0
    stats["cache_hit"] = 0
    stats[f"{type_}_count"] = 0
    last_push[type_] = datetime.datetime.now()
```
Your platform will now report the configured logs every 2 seconds.
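The 2-second cadence comes from the interval check in try_send: data is sent only when at least 2,000 milliseconds have passed since the last push for that operation type. A minimal sketch of that check, using a hypothetical timestamp from 3 seconds ago:

```python
import datetime

# Hypothetical last push, 3 seconds in the past
last_push = datetime.datetime.now() - datetime.timedelta(seconds=3)

# Same computation try_send performs: elapsed time in milliseconds
now = datetime.datetime.now()
interval_ms = (now - last_push).total_seconds() * 1000

# 3 seconds have elapsed, so the 2000 ms threshold is met and data would be sent
should_send = interval_ms >= 2000
print(should_send)  # True
```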
Navigate to the root of your application at build-a-quickstart-lab/send-logs/flashDB.
Run your service to verify that it's reporting logs.
```shell
$python simulator.py
Writing...
try_send
Reading...
try_send
Reading...
try_send
Writing...
try_send
Writing...
try_send
Reading...
sending metrics...
Sent metrics successfully!
sending event...
Event sent successfully!
sending log...
Log sent successfully!
```
Alternative Options
If the language SDK doesn’t fit your needs, try out one of our other options:
- New Relic offers a variety of log forwarding solutions that allow you to collect logs from operating systems, cloud platforms (including Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Heroku), Kubernetes, Docker, and APM.
- Manual Implementation: If the previous options don’t fit your requirements, you can always manually instrument your own library to make a POST request to the New Relic Log API.
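As a rough sketch, a manual implementation builds the JSON payload itself and POSTs it with your license key in a request header. The endpoint and header below reflect the US-region Log API, and the key value is a placeholder; check the Log API documentation for your region and authentication details. The request is constructed but not sent here, to avoid a live call:

```python
import json
import urllib.request

# US-region Log API endpoint (an assumption here -- verify in the Log API docs)
url = "https://log-api.newrelic.com/log/v1"

# The Log API accepts a JSON array of log objects, each with a "message" field
payload = json.dumps([{"message": "FlashDB is using 1.57% memory"}]).encode("utf-8")

request = urllib.request.Request(
    url,
    data=payload,
    headers={
        "Content-Type": "application/json",
        "Api-Key": "YOUR_LICENSE_KEY",  # placeholder for your license key
    },
)

# urllib.request.urlopen(request) would actually send it; omitted in this sketch
print(request.get_full_url())  # https://log-api.newrelic.com/log/v1
```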
In this procedure, you instrumented your service to send logs to New Relic. Next, instrument it to send traces.
Course
This procedure is part of a course that teaches you how to build a quickstart. Continue to the next lesson, send traces from your product.