Somehow our default retention time changed from 30 days to ~6 years. Going through indexes.conf in $SPLUNK_HOME/etc/system/local, it seems that none of the index stanzas contain a setting for frozenTimePeriodInSecs, so it defaulted to ~6 years. So I went through and added frozenTimePeriodInSecs = 2592000 to each stanza to freeze data after 30 days (a sketch of the change is below my questions).
My questions are:
1. This will delete/drop data older than 30 days, correct?
2. Is there any other impact to doing so?
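For reference, here is a minimal sketch of what each stanza now looks like, assuming a hypothetical index named web_logs (the name and paths are placeholders, not our actual config):
[web_logs]
homePath = $SPLUNK_DB/web_logs/db
coldPath = $SPLUNK_DB/web_logs/colddb
thawedPath = $SPLUNK_DB/web_logs/thaweddb
# 30 days * 86400 seconds = 2592000; buckets whose newest event is older than this are frozen
frozenTimePeriodInSecs = 2592000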
↧
2 easy questions about indexes.conf
↧
How is LZ4 faring so far in 6.3+ compared to gzip for indexer rawdata compression?
Digging through the new stuff in 6.3 in preparation for some upgrades, I see LZ4 compression is available for bucket rawdata journal compression in indexes.conf. Awesome! I'm excited. Splunk bucket data seems like it should be a great fit for LZ4's strengths.
But LZ4 should also incur a measurable increase in storage needs over gzip, and algorithm benchmarks often focus on specific interesting data cases or a broad set of varying data types. Splunk's intake is pretty narrow by comparison, so I'm curious whether anyone has any real-world numbers to throw down yet, since switching to LZ4 should change the calculations for capacity planning.
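For context, this is the setting I'm referring to (a sketch; the index name is just an example):
[my_high_volume_index]
# rawdata journal compression; gzip is the default, lz4 trades some disk space for speed
journalCompression = lz4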
↧
How to configure a new index via Splunk Web in an indexer clustering environment?
Hi Splunkers!
I have a problem when trying to distribute new indexes that I created via Splunk Web on the master node of my indexer cluster.
I already know how to configure new indexes via indexes.conf on the master and how to distribute them via apply cluster-bundle.
This works fine!
Today, I tried to do the same thing via Splunk Web. The new index gets created and is displayed on the master, but when I hit deploy, the following message shows:
In handler 'clustermastercontrol': No new bundle will be applied. The master and peers already have this bundle with bundle id = ....
How do I get the configured index onto the cluster peers?
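For reference, the CLI workflow that does work for me looks roughly like this (a sketch; the index name and paths are placeholders):
# $SPLUNK_HOME/etc/master-apps/_cluster/local/indexes.conf on the master
[my_new_index]
homePath = $SPLUNK_DB/my_new_index/db
coldPath = $SPLUNK_DB/my_new_index/colddb
thawedPath = $SPLUNK_DB/my_new_index/thaweddb
repFactor = auto
# then push the bundle from $SPLUNK_HOME/bin on the master:
# ./splunk apply cluster-bundle --answer-yes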
Thanks in advance!
Kind regards,
pyro_wood
↧
After changing data retention from 2 years to 1 year, why did this not free up disk space on my indexer?
Recently, I noticed that the disk on one of my Indexers was nearly full. Currently, all event data is going into the main index and we had all the defaults set for bucket rolling behaviors in the main index. The server has been indexing data for at least two years.
We want to retain searchable event data going back 1 year and are not concerned with archiving beyond that, so I changed the archive policy to be more restrictive (changed frozenTimePeriodInSecs to 31556952 in $SPLUNK_HOME/etc/system/local/indexes.conf). I was expecting that this would free up a lot of space by rolling data older than 1 year out and deleting it, but it didn't. I came back on Monday morning after making this change and barely a dent was made in the amount of free space. There are no cold buckets in my main index's coldpath right now, so my change must have had some effect.
I suspect that this Indexer was incorrectly sized when it was first set up and that has led to this disk space issue. We intake ~2.5 GB/day on this Indexer. The disk is 200 GB in total, with 140 GB set up for the main index (which includes hot/warm/cold buckets).
Do I need to add more drive space and increase the size of my main index in order to fix this problem?
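For reference, the relevant settings as I understand them would look like this (a sketch; the maxTotalDataSizeMB value is only an example based on my 140 GB allocation, not something I have actually set):
[main]
# freeze (delete) buckets once their newest event is older than ~1 year
frozenTimePeriodInSecs = 31556952
# also freeze the oldest buckets if the index grows past this size
maxTotalDataSizeMB = 140000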
↧
How to configure Splunk to keep _internal data longer than 30 days?
For some reason, _internal data is only available for the last 30 days even though the index has not reached its max size limit stated in indexes.conf. Is there any way to increase the retention time for _internal, and if so, where?
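If it helps frame the question, my understanding is that the shipped default lives in $SPLUNK_HOME/etc/system/default/indexes.conf and would be overridden with something like this (a sketch; 90 days is just an example value):
# $SPLUNK_HOME/etc/system/local/indexes.conf (or an app's local directory)
[_internal]
# 90 days * 86400 seconds = 7776000
frozenTimePeriodInSecs = 7776000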
↧
How do disk space issues occur in Splunk indexers and what are the options to troubleshoot or prevent this from happening?
I am trying to figure out how disk space issues occur on indexers, and what the options are to troubleshoot them or prevent them from happening.
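As a starting point, these are the kinds of indexes.conf settings I understand are involved in keeping disk usage bounded (a sketch; names and values are examples only):
# cap how much a single index can hold before its oldest buckets are frozen
[main]
maxTotalDataSizeMB = 300000
frozenTimePeriodInSecs = 7776000
# cap the total bucket data across all indexes that reference this volume
[volume:primary]
path = /opt/splunk/var/lib/splunk
maxVolumeDataSizeMB = 400000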
↧
Defining summary indexes in indexes.conf in a distributed/cluster environment
My app includes the definition of a summary index in indexes.conf. When I am providing a copy of the app for clustered/distributed Splunk Enterprise environments, I like to split the app into two versions: one for the search heads and one for the indexers.
Regarding the summary index definition in indexes.conf, should I include the definition in the search head version of the app or the indexer version of the app? Does it matter either way? Should I only include it in the indexer version if the environment is configured to index the summary data on the indexers (i.e., the outputs.conf is configured to forward summary data to the indexers)?
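For concreteness, the definition in question looks roughly like this (a sketch; the index name is an example, not the app's real one):
[my_app_summary]
homePath = $SPLUNK_DB/my_app_summary/db
coldPath = $SPLUNK_DB/my_app_summary/colddb
thawedPath = $SPLUNK_DB/my_app_summary/thaweddb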
↧
Why does my Splunk indexer keep running out of space with my current indexes.conf?
I have a 3.2TB volume for hot/warm data on SSD and a 12TB volume for cold data on spinning disk. This is my indexes.conf. What am I doing wrong?
#general
maxWarmDBCount = 300
homePath.maxDataSizeMB = 3200000
coldPath.maxDataSizeMB = 12000000
#Volumes
[volume:caliente]
path = /splunkdata
maxVolumeDataSizeMB = 3200000
[volume:frio]
path = /cold
maxVolumeDataSizeMB = 12000000
# indexes
[_audit]
thawedPath = $SPLUNK_DB/audit/thaweddb
tstatsHomePath = volume:_splunk_summaries/audit/datamodel_summary
homePath = volume:caliente/splunk_indexes/audit/db
coldPath = volume:frio/_audit
[shenanigans]
thawedPath = $SPLUNK_DB/shenanigans/thaweddb
tstatsHomePath = volume:_splunk_summaries/shenanigans/datamodel_summary
maxConcurrentOptimizes = 6
maxHotIdleSecs = 86400
maxDataSize = auto_high_volume
homePath = volume:caliente/splunk_indexes/shenanigans/db
coldPath = volume:frio/shenanigans
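In case it matters for sizing, my understanding (which may be wrong) is that volume caps only count the bucket data Splunk manages, so it is common to set them somewhat below the filesystem size to leave headroom; the numbers below are only an example of that idea:
[volume:caliente]
path = /splunkdata
# ~3.0 TB cap on the 3.2 TB filesystem, leaving headroom for non-bucket files
maxVolumeDataSizeMB = 3000000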
↧
Indexes.conf change is not working as expected
Hi,
I have updated the settings as below and restarted Splunk, but it didn't clean up my old data from the indexer. Please find my indexes.conf below
[test]
coldPath = $SPLUNK_DB/test/colddb
homePath = $SPLUNK_DB/test/db
thawedPath = $SPLUNK_DB/test/thaweddb
maxTotalDataSizeMB = 1000000
frozenTimePeriodInSecs = 31556926
Please let me know if any other changes are needed in the index settings.
↧
When searching via REST API in a distributed search environment, why am I getting error "supplied index 'p_uno' missing"?
Hello,
I have a setup with 2 indexers and a dedicated search head; the indexes.conf file is defined only on the indexers (they are configured as deployment clients with the search head as the deployment server in order to simplify the administration of the settings).
Searching via REST API always returns error message `"supplied index 'p_uno' missing"`. According to this:
https://answers.splunk.com/answers/334974/rest-api-receiverssimple-supplied-index-missing.html
the solution would be to define the indexes also on the search head, i.e., the indexes.conf from the deployment class directory should be copied into etc/system/local.
The question is: how can I stop the search head from storing indexed data locally, given that the indexes.conf file also specifies the physical paths per index?
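For discussion, this is the kind of outputs.conf I am considering on the search head so that nothing gets indexed locally (a sketch, not a verified fix; server names are placeholders):
# outputs.conf on the search head
[indexAndForward]
index = false
[tcpout]
defaultGroup = my_indexers
[tcpout:my_indexers]
server = indexer1.example.com:9997, indexer2.example.com:9997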
Thanks
↧
How will Splunk respond if a cold database path is not present when data is going to be rolled from warm to cold?
hi folks,
We have an issue with our cold database filesystem and the estimate to bring it back is around 10 days.
So my question is:
What happens if a cold database path is not present and there is data to be rolled over from warm to cold?
Will warm buckets be kept until the cold database path is available again? Or will they be deleted? Or will Splunk stop abruptly?
↧
What exactly do you mean by a provider in Hunk?
Hi ALL,
I was reading about Hunk in the Splunk docs. They mention something about a provider, ERP (External Results Provider), and the configuration of this provider in indexes.conf.
Can someone please explain to me what exactly this provider is?
Also, how does Hunk work using ERP?
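For context, this is the shape of configuration I am asking about, as far as I can tell from the docs (a sketch; the provider name, environment paths, and HDFS locations are placeholders):
# indexes.conf
[provider:my-hadoop-provider]
vix.family = hadoop
vix.env.JAVA_HOME = /usr/lib/jvm/java-8-openjdk
vix.env.HADOOP_HOME = /opt/hadoop
vix.fs.default.name = hdfs://namenode.example.com:8020
vix.splunk.home.hdfs = /user/splunk/workdir
# a virtual index searched through that provider
[my_virtual_index]
vix.provider = my-hadoop-provider
vix.input.1.path = /data/weblogs/...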
I appreciate your help on this. Thank you.
↧
Relocate KVstore files on filesystem
I am extensively using the KV Store for a project and have run into an issue with where the files are stored on disk. There is a long, sordid history to the disk layout, but I am not delving into that at this time.
The server is a Windows 2012 server running Splunk 6.3.3. The C: drive is 96 GB in size (with 9.7 GB free) and the F: drive is 2 TB in size (with 1.7 TB free). Splunk is installed on the C: drive as per corporate standards. My KV store is currently 60 GB in size. My issue is that the KV store files exist in the default index location at $SPLUNK_HOME\var\lib\splunk, and I do not see any documentation about relocating the KV store to a different filesystem as is possible with indexes.
I've tried to update the indexes.conf to point it to a different filesystem as follows:
[default]
[kvstore]
homePath = F:\Splunk\var\lib\splunk\kvstore\db
coldPath = F:\Splunk\var\lib\splunk\kvstore\colddb
thawedPath = $SPLUNK_DB\kvstore\thaweddb
However that does not move the mongo directory or the dumps directory under the kvstore directory.
I suspect (but have not tried yet) that updating the kvstore stanza to something like:
[kvstore]
mongoPath = F:\Splunk\var\lib\splunk\kvstore\mongo
dumpsPath = F:\Splunk\var\lib\splunk\kvstore\dumps
might do the trick, but I thought I'd check with the larger audience before I go breaking stuff.
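Another avenue I have been wondering about is server.conf rather than indexes.conf; I believe its [kvstore] stanza has a dbPath setting, though I have not verified this and the value below is only a guess at the syntax:
# server.conf
[kvstore]
dbPath = F:\Splunk\var\lib\splunk\kvstore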
So how about it? Has anyone successfully relocated the kv store files by using indexes.conf? If so, how did you do it?
Thanks in advance.
↧
Splunk service won't start after upgrading Palo Alto Networks App for Splunk to 5.0. "Problem parsing indexes.conf: stanza=flowintegrator Required parameter=homePath not configured"
I ran the upgrade to 5.0 of the Palo app and now Splunk won't start. When I try to start the service I get the below error.
Checking prerequisites...
Checking http port [8000]: open
Checking mgmt port [8089]: open
Checking appserver port [127.0.0.1:8065]: open
Checking kvstore port [8191]: open
Checking configuration... Done.
Checking critical directories... Done
Checking indexes...
Problem parsing indexes.conf: stanza=flowintegrator Required parameter=homePath not configured
Validating databases (splunkd validatedb) failed with code '1'. If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue
I looked at the indexes.conf and saw that it was missing the paths to the DBs, so I added them, but it didn't make a difference.
[pan_logs]
maxTotalDataSizeMB = 800000
homePath = /opt/splunk/var/lib/splunk/pan_logs/db
coldPath = /opt/splunk/var/lib/splunk/pan_logs/colddb
thawedPath = /opt/splunk/var/lib/splunk/pan_logs/thaweddb
[flowintegrator]
maxTotalDataSizeMB = 10000
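For comparison, this is what I would expect a complete stanza to look like, mirroring the pan_logs one (the flowintegrator paths below are my assumption, not taken from the app's documentation):
[flowintegrator]
maxTotalDataSizeMB = 10000
homePath = /opt/splunk/var/lib/splunk/flowintegrator/db
coldPath = /opt/splunk/var/lib/splunk/flowintegrator/colddb
thawedPath = /opt/splunk/var/lib/splunk/flowintegrator/thaweddb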
↧
Can anyone provide guidance on my plan to configure cold storage in indexes.conf?
So, I got the 150 TB of cold storage, but it is mounted at /mnt/**splunk1**/cold and /mnt/**splunk2**/cold. I figured that might cause issues with the indexers, so I created symlinks at `/opt/splunk/var/lib/splunk/cold` on each of the indexers to prevent issues with which indexer Splunk wants to write to.
I am now thinking about changing the indexes.conf and adding this volume stanza:
# One Volume for Cold
[volume:cold]
path = /opt/splunk/var/lib/splunk/cold
# 150000GB (150TB)
maxVolumeDataSizeMB = 150000000
Then changing the cold locations from:
`coldPath = volume:primary/defaultdb/colddb`
to
`coldPath = volume:cold/defaultdb/colddb`
The ES definitions are:
`coldPath = $SPLUNK_DB/audit_summarydb/colddb`
I would like to change that too, similar to above:
`coldPath = volume:cold/audit_summarydb/colddb`
Thoughts? Guidance?
↧
My report shows indexed data is being reduced every day. How do I prevent this?
Hi All,
I have a problem with my indexed data.
I indexed a folder that has many subfolders and files into an index, one time.
When the data was indexed on the first day, everything was available.
In my report, you can see that my data has been reduced every day. It seems like my data is being lost.
Can I keep data for 60 days without it being reduced every day?
![alt text][1]
[1]: /storage/temp/126232-screen-shot-2016-04-26-at-125131-am.png
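If retention settings turn out to be the cause, I am guessing the fix would look something like this in indexes.conf (a sketch; the index name is a placeholder and I have not confirmed this is the issue):
[my_index]
# 60 days * 86400 seconds = 5184000
frozenTimePeriodInSecs = 5184000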
Please kindly advise me.
Thank you
Sorry for my English.
↧
If I have currently have indexes with different cold paths, how do I modify them to use the same path?
So at some point some changes were made, and indexes were created with incorrect cold paths.
I have the following in my indexes.conf file:
[default]
[volume:cold]
path = /mounts/splunk_cold
maxVolumeDataSizeMB = 1500000
[volume:home]
path = $SPLUNK_DB
maxVolumeDataSizeMB = 4000000
However, I have at least 15 indexes that are still putting their colddb under volume:home, so the colddb ends up at `/indexA/colddb` next to `/indexA/db` instead of under `/Cold/IndexA/colddb`.
I can see where to change things system-wide, but how do I go through and modify each existing index to use the correct paths? (I understand I will have to move some data around.) I would like to clean this up so I know that cold data is in the cold volume and not taking up space in the home volume.
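For illustration, this is the kind of per-index change I have in mind (a sketch; indexA is a placeholder name):
[indexA]
homePath = volume:home/indexA/db
coldPath = volume:cold/indexA/colddb
thawedPath = $SPLUNK_DB/indexA/thaweddb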
Thanks
↧
How to show a deployed index in Splunk Web on a search head to add data?
Hi,
We are using a Splunk Enterprise installation that uses the following:
1 search head, which also acts as the deployment server and license manager.
1 indexer, with no GUI.
I have created a deployment app on the search head called test-indexes. It contains /test-indexes/default/**indexes.conf**.
In **indexes.conf** I have created an index called [test] with the default bucket paths and the maxDataSize and maxTotalDataSizeMB attributes.
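For reference, the stanza in the deployment app looks roughly like this (a sketch from memory; the values are examples):
[test]
homePath = $SPLUNK_DB/test/db
coldPath = $SPLUNK_DB/test/colddb
thawedPath = $SPLUNK_DB/test/thaweddb
maxDataSize = auto
maxTotalDataSizeMB = 500000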
The index has been deployed to the indexer and is visible in the /opt/splunk/var/lib/splunk directory, both as test.dat and as the test directory.
**My issue** is that even though the index is deployed, there is no way for me to add data to the index from the search head.
It does not exist in the **settings->indexes** view in Splunk Web (search head).
How can I resolve this issue?
// Daniel
↧
Why do I get "Invalid key in stanza [tcp-ssl://:1470] ... connection_host=dns your indexes and inputs are not internally consistent"?
Hello,
Our `/opt/splunk/etc/apps/search/local/inputs.conf` file on our forwarder contains:
[tcp-ssl://:1470]
connection_host=dns
sourcetype=apm_log
index=security_logs
queueSize=5MB
When starting the forwarder, I get:
checking for conf file problems:...
invalid key in stanza [tcp-ssl://:1470] in /opt/splunk/etc/apps/search/local/inputs.conf ...connection_host=dns
your indexes and inputs are not internally consistent.
btool output offers no additional information.
Can anyone offer advice?
Thank you so much.
msantich
↧
How do I configure a data retention policy and a working script for my indexes?
Hi,
I want to create a data retention policy for all my indexes, but I don't know how to configure these settings:
- coldToFrozenDir = "<path to frozen archive>"
- coldToFrozenScript =["<path to program that runs script>"] "<path to script>"
But how do we add this to the indexes.conf file? Can somebody give me an idea of how to do this?
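To make the question concrete, I imagine it would end up looking something like this, but I am not sure (the index name, archive path, and retention value are placeholders):
[my_index]
# keep events ~90 days, then archive frozen buckets instead of deleting them
frozenTimePeriodInSecs = 7776000
# use coldToFrozenDir or coldToFrozenScript, not both
coldToFrozenDir = /opt/splunk/frozen_archive/my_index
# coldToFrozenScript = "/usr/bin/python" "/opt/splunk/bin/my_archive_script.py"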
Thanks in advance.
↧